Understanding and Implementing Deep Neural Networks for Unconditional Source Code Generation

Illustrations, graphs

Authors:
Rodriguez Caicedo, Alvaro Dario
Resource type:
Trabajo de grado - Maestría (master's thesis)
Publication date:
2022
Institution:
Universidad Nacional de Colombia
Repository:
Repositorio Institucional Universidad Nacional de Colombia
Language:
eng
OAI Identifier:
oai:repositorio.unal.edu.co:unal/82449
Online access:
https://repositorio.unal.edu.co/handle/unal/82449
https://repositorio.unal.edu.co/
Keywords:
000 - Ciencias de la computación, información y obras generales::004 - Procesamiento de datos Ciencia de los computadores
000 - Ciencias de la computación, información y obras generales::005 - Programación, programas, datos de computación
Computadores
Procesamiento de la información
Computers
Information processing
Open-ended Code Generation
ML Interpretability
Language Models
Autoregressive Models
Neural Networks
Interpretabilidad de aprendizaje automático
Generación No-Condicionada de Código
Modelos de Lenguaje
Modelos Autoregresivos
Redes Neuronales
Rights
openAccess
License
Atribución-NoComercial 4.0 Internacional
id UNACIONAL2_1bebff23f33c3f2db8ac2fd5dd5e16c4
dc.title.eng.fl_str_mv Understanding and Implementing Deep Neural Networks for Unconditional Source Code Generation
dc.title.translated.spa.fl_str_mv Entendiendo e implementando redes neuronales profundas para la generación no condicionada de código fuente
dc.creator.fl_str_mv Rodriguez Caicedo, Alvaro Dario
dc.contributor.advisor.none.fl_str_mv Gómez Perdomo, Jonatan (Thesis advisor)
Nader Palacio, David Alberto (Thesis co-advisor)
dc.contributor.author.none.fl_str_mv Rodriguez Caicedo, Alvaro Dario
dc.contributor.researchgroup.spa.fl_str_mv Alife: Grupo de Investigación en Vida Artificial
dc.subject.ddc.spa.fl_str_mv 000 - Ciencias de la computación, información y obras generales::004 - Procesamiento de datos Ciencia de los computadores
000 - Ciencias de la computación, información y obras generales::005 - Programación, programas, datos de computación
dc.subject.lemb.spa.fl_str_mv Computadores
Procesamiento de la información
dc.subject.lemb.eng.fl_str_mv Computers
Information processing
dc.subject.proposal.eng.fl_str_mv Open-ended Code Generation
ML Interpretability
Language Models
Autoregressive Models
Neural Networks
dc.subject.proposal.spa.fl_str_mv Interpretabilidad de aprendizaje automático
Generación No-Condicionada de Código
Modelos de Lenguaje
Modelos Autoregresivos
Redes Neuronales
description Illustrations, graphs
abstract Code Generation is a relevant problem in computer science, supporting the automation of tasks such as code completion, program synthesis, and program translation. In recent years, Deep Learning approaches have gained popularity for the code generation problem, and some of these approaches leverage Language Models. However, existing studies mainly focus on evaluation using machine learning metrics. Additionally, the generation process can be classified as conditional or unconditional (i.e., open-ended) depending on the input context provided to the models. This research proposes CodeGenXplainer, a suite of interpretability methods for unconditional language models of source code. CodeGenXplainer comprises four methods that leverage multiple source code features, such as embedding representations, code metrics, compilation errors, and token distributions. Additionally, this research presents an empirical study that validates CodeGenXplainer using publicly available data and extensive sampling of code snippets. Furthermore, CodeGenXplainer provides a base conceptual framework for studying multiple complementary perspectives on machine-generated code. Results show that the studied models can generate code exhibiting properties similar to human-written code, particularly in terms of code metrics, compilation errors, and token-level information; nonetheless, machine-generated code presents issues with the semantic elements of the code. (Text taken from the source)
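The abstract combines two technical ingredients: open-ended (unconditional) sampling from an autoregressive language model, and a token-distribution comparison between machine-generated and human-written code. Below is a minimal sketch of both, not the thesis's implementation: it assumes the Hugging Face transformers and torch libraries, uses the generic gpt2 checkpoint as a stand-in for a code-trained model, and the human_corpus snippets are placeholders; nucleus sampling follows the spirit of [37], and the unigram Jensen-Shannon divergence the spirit of [53] and [72].

```python
# Sketch: unconditional sampling plus a token-level comparison against a
# human-written corpus. Checkpoint name and corpus are placeholders.
from collections import Counter
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in for a code model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Unconditional (open-ended) generation: the only input is the BOS token,
# i.e., no conditioning context; nucleus (top-p) sampling avoids the
# degenerate repetition described in [37].
bos = torch.tensor([[tokenizer.bos_token_id]])
with torch.no_grad():
    samples = model.generate(
        bos, do_sample=True, top_p=0.95, max_length=128,
        num_return_sequences=8, pad_token_id=tokenizer.eos_token_id,
    )
generated = [tokenizer.decode(s, skip_special_tokens=True) for s in samples]

def token_distribution(texts):
    """Unigram token frequencies over a corpus, using the same tokenizer."""
    counts = Counter()
    for text in texts:
        counts.update(tokenizer.tokenize(text))
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between sparse distributions [53]."""
    mixture = {t: 0.5 * (p.get(t, 0.0) + q.get(t, 0.0)) for t in set(p) | set(q)}
    def kl(a, m):
        return sum(a[t] * math.log2(a[t] / m[t]) for t in a if a[t] > 0)
    return 0.5 * kl(p, mixture) + 0.5 * kl(q, mixture)

human_corpus = ["def add(a, b):\n    return a + b\n"]  # placeholder snippets
score = js_divergence(token_distribution(generated),
                      token_distribution(human_corpus))
print(f"JS divergence (machine vs. human tokens): {score:.4f}")
```

A low divergence would indicate that the model's token usage resembles the human corpus, which is the kind of token-level similarity the abstract reports; the semantic issues it also reports require the complementary embedding-, metric-, and compiler-based views named above.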
publishDate 2022
dc.date.accessioned.none.fl_str_mv 2022-10-25T15:11:16Z
dc.date.available.none.fl_str_mv 2022-10-25T15:11:16Z
dc.date.issued.none.fl_str_mv 2022-07-15
dc.type.spa.fl_str_mv Trabajo de grado - Maestría
dc.type.driver.spa.fl_str_mv info:eu-repo/semantics/masterThesis
dc.type.version.spa.fl_str_mv info:eu-repo/semantics/acceptedVersion
dc.type.content.spa.fl_str_mv Text
dc.type.redcol.spa.fl_str_mv http://purl.org/redcol/resource_type/TM
dc.identifier.uri.none.fl_str_mv https://repositorio.unal.edu.co/handle/unal/82449
dc.identifier.instname.spa.fl_str_mv Universidad Nacional de Colombia
dc.identifier.reponame.spa.fl_str_mv Repositorio Institucional Universidad Nacional de Colombia
dc.identifier.repourl.spa.fl_str_mv https://repositorio.unal.edu.co/
dc.language.iso.spa.fl_str_mv eng
dc.relation.indexed.spa.fl_str_mv RedCol
LaReferencia
dc.relation.references.spa.fl_str_mv [1] Karan Aggarwal, Mohammad Salameh, and Abram Hindle. Using Machine Translation for Converting Python 2 to Python 3 Code. Tech. rep. Oct. 2015. doi: 10 . 7287 / peerj . preprints . 1459v1. url: https://dx.doi.org/10.7287/peerj.preprints.1459v1
[2] Vahid Alizadeh and Marouane Kessentini. “Reducing interactiverefactoring effort via clustering-based multi-objective search”. In: ASE 2018 - Proceedings of the 33rd ACM/IEEE International Conference on Automated SoftwarNew York, NY, USA: Association for Computing Machinery, Inc, Sept. 2018, pp. 464–474. isbn: 9781450359375. doi: 10 . 1145 / 3238147 . 3238217. url: https://dl.acm.org/doi/10.1145/3238147.3238217
[3] Miltiadis Allamanis et al. “A survey of machine learning for big code and natural- ness”. In: arXiv 1414172 (2017), pp. 1–36. issn: 23318422
[4] Miltiadis Allamanis et al. Learning Natural Coding Conventions. Tech. rep. url: https://dl.acm.org/doi/10.1145/2635868.2635883
[5] Uri Alon et al. “code2vec: Learning distributed representations of code”. In: arXiv (2018). issn: 23318422. doi: 10.1145/3290353
[6] Pavol Bielik, Veselin Raychev, and Martin Vechev. “PHOG: Probabilistic model for code”. In: 33rd International Conference on Machine Learning, ICML 2016 6 (2016), pp. 4311–4323.
[7] Pierre Bourque and Richard E. Fairley. Guide to the Software Engineering Body of Knowledge (SWEBOK). Tech. rep. IEEE Computer Society, 2014.
[8] Lutz Buch and Artur Andrzejak. “Learning-Based Recursive Aggregation of Abstract Syntax Trees for Code Clone Detection”. In: SANER 2019 - Proceedings of the 2019 IEEE 26th International Conference on Software Analysis, Evolution and Reengineering (Mar. 2019), pp. 95–104. doi: 10.1109/SANER.2019.8668039
[9] Nikhil Buduma and Nicholas Locascio. Fundamentals of Deep Learning: Designing Next-Generation Machine Intelligence Algorithms. Ed. by Mike Loukides and Shannon Cutt. First. O’Reilly Media, Inc, 2017. isbn: 1491925612. url: https://books.google.com/books/about/Fundamentals_of_Deep_Learning.html?id=EZFfrgEACAAJ
[10] Diogo V. Carvalho, Eduardo M. Pereira, and Jaime S. Cardoso. “Machine Learning Interpretability: A Survey on Methods and Metrics”. In: Electronics 8.8 (July 2019), p. 832. issn: 2079-9292. doi: 10.3390/electronics8080832. url: https://www.mdpi.com/2079-9292/8/8/832.
[11] Chaofan Chen et al. “This Looks Like That: Deep Learning for Interpretable Image Recognition”. In: arXiv (June 2018). url: http://arxiv.org/abs/1806.10574
[12] Chunyang Chen et al. “From UI design image to GUI skeleton: A neural machine translator to bootstrap mobile GUI implementation”. In: Proceedings - International Conference on Software Engineering 6 (2018), pp. 665–676. issn: 02705257. doi: 10.1145/3180155.3180240
[13] Jianbo Chen et al. “Learning to Explain: An Information-Theoretic Perspective on Model Interpretation”. In: 35th International Conference on Machine Learning, ICML 2018 2 (Feb. 2018), pp. 1386–1418. url: http://arxiv.org/abs/1802.07814
[14] Mark Chen et al. “Evaluating Large Language Models Trained on Code”. In: (July 2021). doi: 10.48550/arxiv.2107.03374. url: https://arxiv.org/abs/2107.03374v2
[15] Zimin Chen and Martin Monperrus. “A Literature Study of Embeddings on Source Code”. In: (Apr. 2019). url: https://arxiv.org/abs/1904.03061v1
[16] Kyunghyun Cho et al. “Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation”. In: (June 2014). url: https://arxiv.org/abs/1406.1078
[17] Thomas M. Cover and Joy A. Thomas. “Elements of Information Theory”. In: Elements of Information Theory (Apr. 2005), pp. 1–748. doi: 10.1002/047174882X. url: https://onlinelibrary.wiley.com/doi/book/10.1002/047174882X
[18] Juan Cruz-Benito et al. “Automated Source Code Generation and Auto-completion Using Deep Learning: Comparing and Discussing Current Language-Model-Related Approaches”. In: (Sept. 2020). url: https://arxiv.org/abs/2009.07740
[19] Shiyong Cui and Mihai Datcu. “Comparison of Kullback-Leibler divergence approximation methods between Gaussian mixture models for satellite image retrieval”. url: https://ieeexplore.ieee.org/abstract/document/7326631/
[20] Hoa Khanh Dam, Truyen Tran, and Aditya Ghose. “Explainable Software Analytics”. In: Proceedings - International Conference on Software Engineering (Feb. 2018), pp. 53–56. url: http://arxiv.org/abs/1802.00603
[21] Jacob Devlin et al. “Semantic Code Repair using Neuro-Symbolic Transformation Networks”. In: (Oct. 2017). doi: 10.48550/arxiv.1710.11054. url: https://arxiv.org/abs/1710.11054v1
[22] Amit Dhurandhar et al. “Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives”. In: Advances in Neural Information Processing Systems 2018-Decem (Feb. 2018), pp. 592–603. url: http://arxiv.org/abs/1802.07623
[23] Finale Doshi-Velez and Been Kim. "Towards A Rigorous Science of Interpretable Machine Learning". Tech. rep. url: https://arxiv.org/abs/1702.08608
[24] Angela Fan, Mike Lewis, and Yann Dauphin. “Hierarchical Neural Story Generation”. In: ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings (Long Papers) 1 (May 2018), pp. 889–898. doi: 10.48550/arxiv.1805.04833. url: https://arxiv.org/abs/1805.04833v1
[25] Jessica Ficler and Yoav Goldberg. “Controlling Linguistic Style Aspects in Neural Language Generation”. In: (Jul. 2017). url: https://arxiv.org/abs/1707.02633
[26] Ruth Fong and Andrea Vedaldi. “Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks”. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Jan. 2018), pp. 8730–8738. url: http://arxiv.org/abs/1801.03454
[27] Xiaodong Gu et al. “Deep API Learning”. In: Proceedings of the ACM SIGSOFT Symposium on the Foundations of Software Engineering 13-18-Nove (May 2016), pp. 631–642. url: http://arxiv.org/abs/1605.08535
[28] Sumit Gulwani, Oleksandr Polozov, and Rishabh Singh. Program Synthesis. now publishers, 2017. isbn: 9781680832921. url: https://www.nowpublishers.com/article/Details/PGL-010
[29] Tihomir Gvero and Viktor Kuncak. “Synthesizing Java expressions from free-form queries”. In: OOPSLA 2015 - Proceedings of the 2015 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications (2015), pp. 416–432. doi: 10.1145/2814270.2814295. url: https://dl.acm.org/doi/10.1145/2814270.2814295
[30] M. Harman et al. “Achievements, open problems and challenges for search based software testing”. url: https://ieeexplore.ieee.org/abstract/document/7102580/
[31] Simon Haykin. Neural Networks and Learning Machines. Third Edition. 2009. isbn: 9780131471399.
[32] R. Hellebrand et al. “Coevolution of variability models and code: An industrial case study”. In: ACM International Conference Proceeding Series 1 (2014), pp. 274–283. doi: 10.1145/2648511.2648542. url: https://dl.acm.org/doi/10.1145/2648511.2648542
[33] Vincent J. Hellendoorn and Premkumar Devanbu. “Are deep neural networks the best choice for modeling source code?” In: Proceedings of the ACM SIGSOFT Symposium on the Foundations of Software Engineering Part F1301 (2017), pp. 763–773. doi: 10.1145/3106237.3106290. url: https://dl.acm.org/doi/10.1145/3106237.3106290
[34] John R. Hershey and Peder A. Olsen. “Approximating the Kullback Leibler divergence between Gaussian mixture models”. In: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceeding 4 (2007). issn: 15206149. doi: 10.1109/ICASSP.2007.366913. url: https://ieeexplore.ieee.org/document/4218101
[35] Abram Hindle et al. “On the naturalness of software”. In: Proceedings - International Conference on Software Engineering June 2014 (2012), pp. 837–847. issn: 02705257. doi: 10.1109/ICSE.2012.6227135. url: https://dl.acm.org/doi/10.5555/2337223.2337322
[36] Sepp Hochreiter and Jürgen Schmidhuber. “Long Short-Term Memory”. In: Neural Computation 9.8 (Nov. 1997), pp. 1735–1780. issn: 08997667. doi:10.1162/NECO.1997.9.8.1735.
[37] Ari Holtzman et al. “The Curious Case of Neural Text Degeneration”. In: CEUR Workshop Proceedings 2540 (Apr. 2019). issn: 16130073. doi: 10.48550/arxiv.1904.09751. url: https://arxiv.org/abs/1904.09751v2
[38] Jirayus Jiarpakdee. “Towards a more reliable interpretation of defect models”. In: Proceedings - 2019 IEEE/ACM 41st International Conference on Software Engineering: Companion Proceedings. Institute of Electrical and Electronics Engineers Inc., May 2019, pp. 210–213. isbn: 9781728117645. doi: 10.1109/ICSE-Companion.2019.00084.
[39] Jirayus Jiarpakdee, Chakkrit Tantithamthavorn, and Christoph Treude. “AutoSpearman: Automatically Mitigating Correlated Metrics for Interpreting Defect Models”. In: Proceedings - 2018 IEEE International Conference on Software Maintenance and Evolution, ICSM (June 2018), pp. 92–103. url: http://arxiv.org/abs/1806.09791
[40] Jirayus Jiarpakdee, Chakkrit Tantithamthavorn, and Christoph Treude. “The impact of automated feature selection techniques on the interpretation of defect models”. In: Empirical Software Engineering 25.5 (Sept. 2020), pp. 3590–3638. issn: 15737616. doi: 10.1007/s10664-020-09848-1. url: https://link.springer.com/article/10.1007/s10664-020-09848-1
[41] Jirayus Jiarpakdee et al. “An Empirical Study of Model-Agnostic Techniques for Defect Prediction Models”. In: IEEE Transactions on Software Engineering (Mar. 2020), pp. 1–1. issn: 0098-5589. doi: 10.1109/tse.2020.2982385. url: https://ieeexplore.ieee.org/document/9044387/
[42] Dan Jurafsky and James H. Martin. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. 3rd ed. 2019. url: https://web.stanford.edu/~jurafsky/slp3/
[43] Rafael Michael Karampatsis et al. “Big code != big vocabulary: Open-vocabulary models for source code”. In: Proceedings - International Conference on Software Engineering (2020), pp. 1073–1085. issn: 02705257. doi: 10.1145/3377811.3380342. url: https://arxiv.org/abs/2003.07914
[44] Anjan Karmakar and Romain Robbes. “What do pre-trained code models know about code?” In: (Aug. 2021), pp. 1332–1336. doi: 10.48550/arxiv.2108.11308. url: https://arxiv.org/abs/2108.11308v1
[45] Andrej Karpathy, Justin Johnson, and Li Fei-Fei. "Visualizing And Understanding Recurrent Networks". Tech. rep. In: (Jun. 2015) url: https://arxiv.org/abs/1506.02078
[46] Taghi M. Khoshgoftaar and Edward B. Allen. “Applications of information theory to software engineering measurement”. In: Software Quality Journal 1994 3:2 3.2 (June 1994), pp. 79–103. issn: 1573-1367. doi: 10.1007/BF00213632. url: https://link.springer.com/article/10.1007/BF00213632
[47] Joe Kilian and Hava T. Siegelmann. “The Dynamic Universality of Sigmoidal Neural Networks”. In: Information and Computation 128.1 (July 1996), pp. 48–56. issn: 0890-5401. doi: 10.1006/INCO.1996.0062.
[48] Been Kim, Rajiv Khanna, and Oluwasanmi Koyejo. "Examples are not Enough, Learn to Criticize! Criticism for Interpretability". Tech. rep. 2016. url: https://dl.acm.org/doi/10.5555/3157096.3157352
[49] Been Kim et al. “Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)”. In: 35th International Conference on Machine Learning, ICML 2018 6 (Nov. 2017), pp. 4186–4195. url: http://arxiv.org/abs/1711.11279
[50] Quoc V. Le and Tomas Mikolov. “Distributed Representations of Sentences and Documents”. In: 31st International Conference on Machine Learning, ICML 2014 4 (May 2014), pp. 2931–2939. url: http://arxiv.org/abs/1405.4053
[51] Norman Fenton and James M. Bieman. Software Metrics: A Rigorous and Practical Approach. Third Edition. (Oct. 2014). doi: 10.1201/b17461. url: https://www.taylorfrancis.com/books/mono/10.1201/b17461/software-metrics-norman-fenton-james-bieman
[52] Y Li et al. “A multi-objective and cost-aware optimization of requirements assignment for review”. In: 2017 IEEE Congress on Evolutionary Computation, CEC 2017 - Proceedings. School of Computer Science and Engineering, Beihang University, Beijing, China, 2017, pp. 89–96. doi: 10.1109/CEC.2017.7969300. url: https://ieeexplore.ieee.org/document/7969300
[53] Jianhua Lin. “Divergence Measures Based on the Shannon Entropy”. In: IEEE Transactions on Information Theory 37.1 (1991), pp. 145–151. issn: 15579654. doi: 10.1109/18.61115.
[54] Peter J. Liu et al. “Generating Wikipedia by Summarizing Long Sequences”. In: 6th International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings (Jan. 2018). doi: 10.48550/arxiv.1801.10198. url: https://arxiv.org/abs/1801.10198v1
[55] Scott Lundberg and Su-In Lee. “A Unified Approach to Interpreting Model Predictions”. In: Advances in Neural Information Processing Systems 2017-Decem (May 2017), pp. 4766–4775. url: http://arxiv.org/abs/1705.07874
[56] Thainá Mariani and Silvia Regina Vergilio. “A systematic review on search-based refactoring”. In: Information and Software Technology 83 (Mar. 2017), pp. 14–34. issn: 09505849. doi: 10.1016/j.infsof.2016.11.009
[57] Stephen Marsland. Machine Learning: An Algorithmic Perspective. Second Edition. Chapman and Hall/CRC, 2014. url: https://www.routledge.com/Machine-Learning-An-Algorithmic-Perspective-Second-Edition/Marsland/p/book/9781466583283
[58] R. Thomas McCoy et al. “How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN”. In: (Nov. 2021). url: https://arxiv.org/abs/2111.09509v1
[59] Leland McInnes, John Healy, and James Melville. “UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction”. In: (Feb. 2018). doi: 10.48550/arxiv.1802.03426. url: https://arxiv.org/abs/1802.03426v3
[60] Tomas Mikolov et al. “Efficient Estimation of Word Representations in Vector Space”. In: 1st International Conference on Learning Representations, ICLR 2013 - Workshop Track (Jan. 2013). doi: 10.48550/arxiv.1301.3781. url: https://arxiv.org/abs/1301.3781v3
[61] Christoph Molnar. "Interpretable Machine Learning". 1st. Lulu (eBook), 2019. url: https://christophm.github.io/interpretable-ml-book/
[62] Grégoire Montavon, Wojciech Samek, and Klaus-Robert Müller. “Methods for Interpreting and Understanding Deep Neural Networks”. In: Digital Signal Processing: A Review Journal 73 (June 2017), pp. 1–15. doi:10.1016/j.dsp.2017.10.011. url: https://arxiv.org/abs/1706.07979
[63] David Nader and Jonatan Gómez. “A Computational Solution for the Software Refactoring Problem: From a Formalism Toward an Optimization Approach”. url: https://repositorio.unal.edu.co/handle/unal/62057
[64] An Nguyen. “Language Model Evaluation in Open-ended Text Generation”. In: (Aug. 2021). doi: 10.48550/arxiv.2108.03578. url: https://arxiv.org/abs/2108.03578v1
[65] Tung Thanh Nguyen et al. “A statistical semantic language model for source code”. In: 2013 9th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering. New York, New York, USA: ACM Press, 2013, pp. 532–542. isbn: 9781450322379. doi: 10.1145/2491411.2491458. url: https://dl.acm.org/doi/10.1145/2491411.2491458
[66] Koichi Odajima et al. “Greedy rule generation from discrete data and its use in neural network rule extraction”. In: Neural Networks 21.7 (Sept. 2008), pp. 1020–1028. issn: 08936080. doi: 10.1016/j.neunet.2008.01.003. url: https://pubmed.ncbi.nlm.nih.gov/18442894/
[67] Alec Radford et al. “Improving Language Understanding by Generative Pre-Training”. OpenAI. url: https://openai.com/blog/language-unsupervised/
[68] Ali Ouni et al. “MORE: A multi-objective refactoring recommendation approach to introducing design patterns and fixing code smells”. In: Journal of Software: Evolution and Process 29.5 (May 2017). issn: 20477481. doi: 10.1002/smr.1843
[69] Josh Patterson and Adam Gibson. Deep Learning: A Practitioner’s Approach. O’Reilly Media, Inc, 2017. url: https://books.google.com.co/books?id=BdPrrQEACAAJ
[70] Tejaswini Pedapati et al. “Learning Global Transparent Models Consistent with Local Contrastive Explanations”. In: (Feb. 2020). url: http://arxiv.org/abs/2002.08247
[71] Miltiadis Allamanis et al. “A Survey of Machine Learning for Big Code and Naturalness”. url: https://arxiv.org/abs/1709.06182
[72] Krishna Pillutla et al. “MAUVE: Measuring the Gap Between Neural Text and Human Text using Divergence Frontiers”. In: (Feb. 2021). url: https://arxiv.org/abs/2102.01454v3
[73] Gregory Plumb et al. “Regularizing Black-box Models for Improved Interpretability”. In: arXiv (Feb. 2019). url: http://arxiv.org/abs/1902.06787
[74] Yewen Pu et al. “sk p: a neural program corrector for MOOCs”. In: SPLASH Companion 2016 - Companion Proceedings of the 2016 ACM SIGPLAN International Conference (July 2016), pp. 39–40. url: http://arxiv.org/abs/1607.02902
[75] Alec Radford et al. “Language Models are Unsupervised Multitask Learners”. url: https://openai.com/blog/better-language-models/
[76] Karthikeyan Natesan Ramamurthy et al. “Model Agnostic Multilevel Explanations”. In: arXiv (Mar. 2020). url: http://arxiv.org/abs/2003.06005
[77] Shuo Ren et al. “CodeBLEU: a Method for Automatic Evaluation of Code Synthesis”. In: (Sept. 2020). url: https://arxiv.org/abs/2009.10297v2
[78] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. “‘Why Should I Trust You?’ Explaining the Predictions of Any Classifier”. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Association for Computing Machinery, Aug. 2016, pp. 1135–1144. isbn: 9781450342322. doi: 10.1145/2939672.2939778. url: http://dx.doi.org/10.1145/2939672.2939778
[79] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. “Anchors: High-Precision Model-Agnostic Explanations”. In: Proceedings of the AAAI Conference on Artificial Intelligence (2018). url: https://ojs.aaai.org/index.php/AAAI/article/view/11491
[80] Peter J. Rousseeuw. “Silhouettes: A graphical aid to the interpretation and validation of cluster analysis”. In: Journal of Computational and Applied Mathematics 20 (Nov. 1987), pp. 53–65. issn: 0377-0427. doi: 10.1016/0377-0427(87)90125-7
[81] Rico Sennrich, Barry Haddow, and Alexandra Birch. “Neural Machine Translation of Rare Words with Subword Units”. In: 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016 - Long Papers 3 (Aug. 2015), pp. 1715–1725. url: http://arxiv.org/abs/1508.07909
[82] Rudy Setiono and Huan Liu. "Understanding Neural Networks via Rule Extraction". Tech. rep. url: https://dl.acm.org/doi/10.5555/1625855.1625918
[83] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. “Learning Important Features Through Propagating Activation Differences”. In: 34th International Conference on Machine Learning, ICML 2017 7 (Apr. 2017), pp. 4844–4866. url: http://arxiv.org/abs/1704.02685
[84] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. “Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps”. In: 2nd International Conference on Learning Representations, ICLR 2014 - Workshop Track Proceed (Dec. 2013). url: http://arxiv.org/abs/1312.6034
[85] Armando Solar-Lezama. “Program Synthesis by Sketching”. PhD thesis. University of California, Berkeley, 2008.
[86] Erik Strumbelj and Igor Kononenko. “An Efficient Explanation of Individual Classifications using Game Theory”. Tech. rep. 2010, pp. 1–18. doi: 10.5555/1756006.1756007. url: https://dl.acm.org/doi/10.5555/1756006.1756007
[87] Chakkrit Tantithamthavorn, Ahmed E. Hassan, and Kenichi Matsumoto. “The Impact of Class Rebalancing Techniques on the Performance and Interpretation of Defect Prediction Models”. In: IEEE Transactions on Software Engineering 46.11 (Nov. 2018), pp. 1200–1219. issn: 19393520. doi: 10.1109/TSE.2018.2876537. url: https://ieeexplore.ieee.org/document/8494821/
[88] Chakkrit Tantithamthavorn, Jirayus Jiarpakdee, and John Grundy. "Explainable AI for Software Engineering". Tech. rep. 2019. url: https://ieeexplore.ieee.org/document/9678580
[89] Jake VanderPlas. Python Data Science Handbook. O’Reilly Media, Inc, 2017. url: https://jakevdp.github.io/PythonDataScienceHandbook/
[90] Ashish Vaswani et al. “Attention is all you need”. In: Advances in Neural Information Processing Systems. Vol. 2017-Decem. Neural information processing systems foundation, June 2017, pp. 5999–6009. url: https://arxiv.org/abs/1706.03762v5
[91] Richard J. Waldinger and Richard C. T Lee. “PROW: a step toward automatic program writing”. In: IJCAI’69: Proceedings of the 1st international joint conference on Artificial intelligence (1969). url: https://dl.acm.org/doi/10.5555/1624562.1624586
[92] Luca Gazzola, Daniela Micucci, and Leonardo Mariani. “Automatic Program Repair Techniques: A Survey”. In: Jisuanji Xuebao/Chinese Journal of Computers 41.3 (2018), pp. 588–610. doi: 10.11897/SP.J.1016.2018.00588. url: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8089448
[93] Cody Watson et al. “A systematic literature review on the use of deep learning in Software Engineering Research”. In: arXiv (2020). issn: 23318422. url: https://arxiv.org/abs/2009.06520
[94] Supatsara Wattanakriengkrai et al. "Predicting Defective Lines Using a Model-Agnostic Technique". Tech. rep. url: https://ieeexplore.ieee.org/document/9193975
[95] Ethan Weinberger, Joseph Janizek, and Su-In Lee. “Learning Deep Attribution Priors Based On Prior Knowledge”. In: arXiv (Dec. 2019). url: http://arxiv.org/abs/1912.10065
[96] Maksymilian Wojtas and Ke Chen. “Feature Importance Ranking for Deep Learning”. In: (Oct. 2020). url: http://arxiv.org/abs/2010.08973
[97] Mike Wu et al. “Beyond Sparsity: Tree Regularization of Deep Models for Interpretability”. In: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (Nov. 2017), pp. 1670–1678. url: http://arxiv.org/abs/1711.06178
[98] Chih-Kuan Yeh et al. “Representer Point Selection for Explaining Deep Neural Networks”. In: Advances in Neural Information Processing Systems 2018-Decem (Nov. 2018), pp. 9291–9301. url: http://arxiv.org/abs/1811.09720
[99] Pengcheng Yin and Graham Neubig. “A Syntactic Neural Model for General-Purpose Code Generation”. In: ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings o 1 (Apr. 2017), pp. 440–450. url: http://arxiv.org/abs/1704.01696
[100] Quanshi Zhang, Ying Nian Wu, and Song-Chun Zhu. “Interpretable Convolutional Neural Networks”. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Oct. 2017), pp. 8827–8836. url: http://arxiv.org/abs/1710.00935
[101] Yu Zhang et al. “A Survey on Neural Network Interpretability”. In: arXiv (Dec. 2020). url: http://arxiv.org/abs/2012.14261
dc.rights.coar.fl_str_mv http://purl.org/coar/access_right/c_abf2
dc.rights.license.spa.fl_str_mv Atribución-NoComercial 4.0 Internacional
dc.rights.uri.spa.fl_str_mv http://creativecommons.org/licenses/by-nc/4.0/
dc.rights.accessrights.spa.fl_str_mv info:eu-repo/semantics/openAccess
dc.format.extent.spa.fl_str_mv xi, 112 páginas
dc.format.mimetype.spa.fl_str_mv application/pdf
dc.publisher.spa.fl_str_mv Universidad Nacional de Colombia
dc.publisher.program.spa.fl_str_mv Bogotá - Ingeniería - Maestría en Ingeniería - Ingeniería de Sistemas y Computación
dc.publisher.faculty.spa.fl_str_mv Facultad de Ingeniería
dc.publisher.place.spa.fl_str_mv Bogotá, Colombia
dc.publisher.branch.spa.fl_str_mv Universidad Nacional de Colombia - Sede Bogotá
bitstream.url.fl_str_mv https://repositorio.unal.edu.co/bitstream/unal/82449/1/license.txt
https://repositorio.unal.edu.co/bitstream/unal/82449/2/1019099124.2022.pdf
bitstream.checksum.fl_str_mv eb34b1cf90b7e1103fc9dfd26be24b4a
b9131dd0539e6614f7a11c410ec9a6cd
bitstream.checksumAlgorithm.fl_str_mv MD5
MD5
repository.name.fl_str_mv Repositorio Institucional Universidad Nacional de Colombia
repository.mail.fl_str_mv repositorio_nal@unal.edu.co
_version_ 1806886133709144064
spelling Atribución-NoComercial 4.0 Internacionalhttp://creativecommons.org/licenses/by-nc/4.0/info:eu-repo/semantics/openAccesshttp://purl.org/coar/access_right/c_abf2Gómez Perdomo, Jonatan (Thesis advisor)f5b12a1f33e4f80f2f647b22bf161ea4600Nader Palacio, David Alberto (Thesis co-advisor)bf0ec06adf5d6b30bb46cccd07e19940Rodriguez Caicedo, Alvaro Dario40498166b5e028c422c2d2cbb408d3f4Alife: Grupo de Investigación en Vida Artificial2022-10-25T15:11:16Z2022-10-25T15:11:16Z2022-07-15https://repositorio.unal.edu.co/handle/unal/82449Universidad Nacional de ColombiaRepositorio Institucional Universidad Nacional de Colombiahttps://repositorio.unal.edu.co/ilustraciones, gráficasCode Generation is a relevant problem in computer science, supporting the automation of tasks such as code completion, program synthesis, and program translation. In recent years, Deep Learning approaches have gained popularity in the code generation problem, and some of these approaches leverage Language Models. However, the existing studies mainly focus on evaluation using machine learning metrics. Additionally, the generation process can be classified into conditional or unconditional (i.e., open-ended) approaches depending on the input context provided to the models. This research proposes CodeGenXplainer, a suite of interpretability methods for Unconditional Language Models of source code. CodeGenXplainer comprises four methods leveraging multiple source code features such as embedding representations, code metrics, compilation errors, and token distributions. Additionally, this research presents an empirical study to validate CodeGenXplainer using publicly available data and extensive sampling of code snippets. Furthermore, CodeGenXplainer provides a base conceptual framework that allows studying multiple complementary perspectives based on machine-generated code. Results show that the studied models can generate code exhibiting similar properties to human code, particularly in terms of code metrics, compilation errors, and token-level information; nonetheless, machine-generated code presents issues with the semantic elements of the code. (Texto tomado de la fuente)La generación de código es un problema relevante en ciencias de la computación, que soporta la automatización de tareas como completado de código, síntesis y traducción de programas. En los últimos años, los enfoques de aprendizaje profundo han ganado popularidad en el problema de generación de código y algunos de estos enfoques están basados en modelos de lenguaje. Sin embargo, los estudios existentes se centran principalmente en la evaluación utilizando métricas de aprendizaje automático. Adicionalmente, el proceso de generación se puede clasificar en enfoques condicionales o incondicionales (es decir, open-ended) según el contexto de entrada proporcionado a los modelos. Esta investigación propone CodeGenXplainer, un conjunto de métodos de interpretabilidad para modelos de lenguaje no condicionados de código fuente. CodeGenXplainer comprende cuatro métodos que aprovechan múltiples características de código fuente, como representaciones abstractas, métricas de código, errores de compilación y distribuciones de tokens. Además, esta investigación presenta un estudio empírico para validar CodeGenXplainer utilizando datos disponibles públicamente y muestreo extensivo de fragmentos de código. Por otra parte, CodeGenXplainer proporciona un marco conceptual base que permite estudiar múltiples perspectivas complementarias basadas en código generado por máquina. 
Los resultados muestran que los modelos estudiados pueden generar código que exhibe propiedades similares al código humano, particularmente en términos de métricas de código, errores de compilación e información a nivel de token; no obstante, el código generado por máquina presenta problemas con los elementos semánticos del código.MaestríaMagíster en Ingeniería - Ingeniería de Sistemas y ComputaciónSistemas inteligentesIngeniería de softwarexi, 112 páginasapplication/pdfengUniversidad Nacional de ColombiaBogotá - Ingeniería - Maestría en Ingeniería - Ingeniería de Sistemas y ComputaciónFacultad de IngenieríaBogotá, ColombiaUniversidad Nacional de Colombia - Sede Bogotá000 - Ciencias de la computación, información y obras generales::004 - Procesamiento de datos Ciencia de los computadores000 - Ciencias de la computación, información y obras generales::005 - Programación, programas, datos de computaciónComputadoresProcesamiento de la informaciónComputersInformation processingOpen-ended Code GenerationML InterpretabilityLanguage ModelsAutoregressive ModelsNeural NetworksInterpretabilidad de aprendizaje automáticoGeneración No-Condicionada de CódigoModelos de LenguajeModelos AutoregresivosRedes NeuronalesUnderstanding and Implementing Deep Neural Networks for Unconditional Source Code GenerationEntendiendo e implementando redes neuronales profundas para la generación no condicionada de código fuenteTrabajo de grado - Maestríainfo:eu-repo/semantics/masterThesisinfo:eu-repo/semantics/acceptedVersionTexthttp://purl.org/redcol/resource_type/TMRedColLaReferencia[1] Karan Aggarwal, Mohammad Salameh, and Abram Hindle. Using Machine Translation for Converting Python 2 to Python 3 Code. Tech. rep. Oct. 2015. doi: 10 . 7287 / peerj . preprints . 1459v1. url: https://dx.doi.org/10.7287/peerj.preprints.1459v1[2] Vahid Alizadeh and Marouane Kessentini. “Reducing interactiverefactoring effort via clustering-based multi-objective search”. In: ASE 2018 - Proceedings of the 33rd ACM/IEEE International Conference on Automated SoftwarNew York, NY, USA: Association for Computing Machinery, Inc, Sept. 2018, pp. 464–474. isbn: 9781450359375. doi: 10 . 1145 / 3238147 . 3238217. url: https://dl.acm.org/doi/10.1145/3238147.3238217[3] Miltiadis Allamanis et al. “A survey of machine learning for big code and natural- ness”. In: arXiv 1414172 (2017), pp. 1–36. issn: 23318422[4] Miltiadis Allamanis et al. Learning Natural Coding Conventions. Tech. rep. url: https://dl.acm.org/doi/10.1145/2635868.2635883[5] Uri Alon et al. “code2vec: Learning distributed representations of code”. In: arXiv (2018). issn: 23318422. doi: 10.1145/3290353[6] Pavol Bielik, Veselin Raychev, and Martin Vechev. “PHOG: Probabilistic model for code”. In: 33rd International Conference on Machine Learning, ICML 2016 6 (2016), pp. 4311–4323.[7] Pierre Bourque and Richard E. Fairley. Guide to the Software Engineering Body of Knowledge SW Tech. rep. IEEE Computer Society, 2014.[8] Lutz Buch and Artur Andrzejak. “Learning-Based Recursive Aggregation of Abstract Syntax Trees for Code Clone Detection”. In: SANER 2019 - Proceedings of the 2019 IEEE 26th International Conference on Software Analysis (Mar. 2019), pp. 95–104. doi: 10.1109/SANER.2019.8668039. 103 BIBLIOGRAPHY 104[9] Nikhil Buduma and Nicholas Locascio. Fundamentals of deep learning : designing next-generation Ed. by Mike Loukides and Shannon Cutt. First. O’Reilly Media, Inc, 2017. isbn: 1491925612. 
url: https://books.google.com/books/about/Fundamentals_of_Deep_Learning.html?id=EZFfrgEACAAJ[10] Diogo V. Carvalho, Eduardo M. Pereira, and Jaime S. Cardoso. “Machine Learning Interpretability: A Survey on Methods and Metrics”. In: Electronics 8.8 (July 2019), p. 832. issn: 2079-9292. doi: 10.3390/electronics8080832. url: https://www.mdpi.com/2079-9292/8/8/832.[11] Chaofan Chen et al. “This Looks Like That: Deep Learning for Interpretable Image Recognition”. In: arXiv (June 2018). url: http://arxiv.org/abs/1806.10574[12] Chunyang Chen et al. “From UI design image to GUI skeleton: A neural machine translator to bootstrap mobile GUI implementation”. In: Proceedings - International Conference on Software Engineering 6 (2018), pp. 665–676. issn: 02705257. doi: 10.1145/3180155.3180240[13] Jianbo Chen et al. “Learning to Explain: An Information-Theoretic Perspective on Model Interpretation”. In: 35th International Conference on Machine Learning, ICML 2018 2 (Feb. 2018), pp. 1386–1418. url: http://arxiv.org/abs/1802.07814[14] Mark Chen et al. “Evaluating Large Language Models Trained on Code”. In: (July 2021). doi: 10.48550/arxiv.2107.03374. url: https://arxiv.org/abs/2107.03374v2[15] Zimin Chen and Martin Monperrus. “A Literature Study of Embeddings on Source Code”. In: (Apr. 2019). url: https://arxiv.org/abs/1904.03061v1[16] Kyunghyun Cho et al. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In: (Jun. 2014). url: https://arxiv.org/abs/1406.1078[17] Thomas M. Cover and Joy A. Thomas. “Elements of Information Theory”. In: Elements of Information Theory (Apr. 2005), pp. 1–748. doi:10.1002/047174882X. url: https://onlinelibrary.wiley.com/doi/book/10.1002/047174882X.[18] Juan Cruz-Benito et al. "Automated Source Code Generation and Auto-completion Using Deep Learning: Comparing and Discussing Current Language-Model-Related Approaches". In: (Sep 2020) url: https://arxiv.org/abs/2009.07740[19] Shiyong Cui and Mihai Datcu. “Comparison of Kullback-Leibler divergence approximation methods between Gaussian mixture models for satellite image retrieval”. https://ieeexplore.ieee.org/abstract/document/7326631/.[20] Hoa Khanh Dam, Truyen Tran, and Aditya Ghose. “Explainable Software Analytics”. In: Proceedings - International Conference on Software Engineering (Feb. 2018), pp. 53–56. url: http://arxiv.org/abs/1802.00603 BIBLIOGRAPHY 105[21] Jacob Devlin et al. “Semantic Code Repair using Neuro-Symbolic Transformation Networks”. In: (Oct. 2017). doi:10 .48550/arxiv.1710 . 11054. url: https://arxiv.org/abs/1710.11054v1[22] Amit Dhurandhar et al. “Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives”. In: Advances in Neural Information Processing Systems 2018-Decem (Feb. 2018), pp. 592–603. url: http://arxiv.org/abs/1802.07623[23] Finale Doshi-Velez and Been Kim. "Towards A Rigorous Science of Interpretable Machine Learning". Tech. rep. url: https://arxiv.org/abs/1702.08608[24] Angela Fan, Mike Lewis, and Yann Dauphin. “Hierarchical Neural Story Generation”. In: ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Pro 1 (May 2018), pp. 889–898. doi:10.48550/arxiv . 1805 . 04833. url: https ://arxiv.org/abs/1805.04833v1[25] Jessica Ficler and Yoav Goldberg. “Controlling Linguistic Style Aspects in Neural Language Generation”. In: (Jul. 2017). url: https://arxiv.org/abs/1707.02633[26] Ruth Fong and Andrea Vedaldi. 
“Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks”. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Jan. 2018), pp. 8730–8738. url: http://arxiv.org/abs/1801.03454[27] Xiaodong Gu et al. “Deep API Learning”. In: Proceedings of the ACM SIGSOFT Symposium on the Foundations of Software Engineering 13-18-Nove (May 2016), pp. 631–642. url: http://arxiv.org/abs/1605.08535[28] Sumit Gulwani, Oleksandr Polozov, and Rishabh Singh. Program Synthesis. now publishers, 2017. isbn: 9781680832921. url: https://www.nowpublishers.com/article/Details/PGL-010[29] Tihomir Gvero and Viktor Kuncak. “Synthesizing Java expressions from free-form queries”. In: OOPSLA '87: Conference proceedings on Object-oriented programming systems, languages and applications 25-30-Oct- (2015), pp. 416–432. doi: 10.1145/2814270.2814295. url: https://dl.acm.org/doi/10.1145/2858965.2814295[30] M Harman et al. “Achievements, open problems and challenges for search based software testing”. In: ieeexplore.ieee.org . url: https://ieeexplore.ieee.org/abstract/document/7102580/[31] Simon Haykin et al. Neural Networks and Learning Machines Third Edition. 2009. isbn: 9780131471399.[32] R Hellebrand et al. “Coevolution of variability models and code: An industrial case study”. In: ACM International Conference Proceeding Series 1 (2014), pp. 274–283. doi: 10.1145/2648511.2648542. url: https://dl.acm.org/doi/10.1145/2648511.2648542. BIBLIOGRAPHY 106[33] Vincent J. Hellendoorn and Premkumar Devanbu. “Are deep neural networks the best choice for modeling source code?” In: Proceedings of the ACM SIGSOFT Symposium on the Foundations of Software Engineering Part F1301 (2017), pp. 763–773. doi: 10.1145/3106237.3106290. url: https://dl.acm.org/doi/10.1145/3106237.3106290[34] John R. Hershey and Peder A. Olsen. “Approximating the Kullback Leibler divergence between Gaussian mixture models”. In: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceeding 4 (2007). issn: 15206149. doi: 10.1109/ICASSP.2007.366913. url: https://ieeexplore.ieee.org/document/4218101[35] Abram Hindle et al. “On the naturalness of software”. In: Proceedings - International Conference on Software Engineering June 2014 (2012), pp. 837–847. issn: 02705257. doi: 10.1109/ICSE.2012.6227135. url: https://dl.acm.org/doi/10.5555/2337223.2337322[36] Sepp Hochreiter and Jürgen Schmidhuber. “Long Short-Term Memory”. In: Neural Computation 9.8 (Nov. 1997), pp. 1735–1780. issn: 08997667. doi:10.1162/NECO.1997.9.8.1735.[37] Ari Holtzman et al. “The Curious Case of Neural Text Degeneration”. In: CEUR Workshop Proceedings 2540 (Apr. 2019). issn: 16130073. doi: 10.48550/arxiv.1904.09751. url: https://arxiv.org/abs/1904.09751v2[38] Jirayus Jiarpakdee. “Towards a more reliable interpretation of defect models”. In: Proceedings - 2019 IEEE/ACM 41st International Conference on Software Engineering: Companion Institute of Electrical and Electronics Engineers Inc., May 2019, pp. 210–213. isbn: 9781728117645. doi: 10.1109/ICSE-Companion.2019.00084.[39] Jirayus Jiarpakdee, Chakkrit Tantithamthavorn, and Christoph Treude. “AutoSpearman: Automatically Mitigating Correlated Metrics for Interpreting Defect Models”. In: Proceedings - 2018 IEEE International Conference on Software Maintenance and Evolution, ICSM (June 2018), pp. 92–103. url: http://arxiv.org/abs/1806.09791[40] Jirayus Jiarpakdee, Chakkrit Tantithamthavorn, and Christoph Treude. 
“The impact of automated feature selection techniques on the interpretation of defect models”. In: Empirical Software Engineering 25.5 (Sept. 2020), pp. 3590–3638. issn: 15737616. doi: 10.1007/s10664-020-09848-1. url: https://link.springer.com/article/10.1007/s10664-020-09848-1[41] Jirayus Jiarpakdee et al. “An Empirical Study of Model-Agnostic Techniques for Defect Prediction Models”. In: IEEE Transactions on Software Engineering (Mar. 2020), pp. 1–1. issn: 0098-5589. doi: 10.1109/tse.2020.2982385. url: https://ieeexplore.ieee.org/document/9044387/[42] Dan Jurafsky and James H Martin. "Speech and Language Processing, An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition" 3rd. 2019. url: https://web.stanford.edu/~jurafsky/slp3/. BIBLIOGRAPHY 107[43] Rafael Michael Karampatsis et al. “Big code != big vocabulary: Open-vocabulary models for source code”. In: Proceedings - International Conference on Software Engineering (2020), pp. 1073–1085. issn: 02705257. doi: 10.1145/3377811.3380342. url: https://arxiv.org/abs/2003.07914[44] Anjan Karmakar and Romain Robbes. “What do pre-trained code models know about code?” In: (Aug. 2021), pp. 1332–1336. doi: 10.48550/arxiv.2108.11308. url: https://arxiv.org/abs/2108.11308v1[45] Andrej Karpathy, Justin Johnson, and Li Fei-Fei. "Visualizing And Understanding Recurrent Networks". Tech. rep. In: (Jun. 2015) url: https://arxiv.org/abs/1506.02078[46] Taghi M. Khoshgoftaar and Edward B. Allen. “Applications of information theory to software engineering measurement”. In: Software Quality Journal 1994 3:2 3.2 (June 1994), pp. 79–103. issn: 1573-1367. doi: 10.1007/BF00213632. url: https://link.springer.com/article/10.1007/BF00213632[47] Joe Kilian and Hava T. Siegelmann. “The Dynamic Universality of Sigmoidal Neural Networks”. In: Information and Computation 128.1 (July 1996), pp. 48–56. issn: 0890-5401. doi: 10.1006/INCO.1996.0062.[48] Been Kim, Rajiv Khanna, and Oluwasanmi Koyejo. "Examples are not Enough, Learn to Criticize! Criticism for Interpretability". Tech. rep. 2016. url: https://dl.acm.org/doi/10.5555/3157096.3157352[49] Been Kim et al. “Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)”. In: 35th International Conference on Machine Learning, ICML 2018 6 (Nov. 2017), pp. 4186–4195. url: http://arxiv.org/abs/1711.11279[50] Quoc V. Le and Tomas Mikolov. “Distributed Representations of Sentences and Documents”. In: 31st International Conference on Machine Learning, ICML 2014 4 (May 2014), pp. 2931–2939. url: http://arxiv.org/abs/1405.4053[51] Series Editor Richard Leblanc et al. “Software Metrics : A Rigorous and Practical Approach, Third Edition”. In: (Oct. 2014). doi: 10.1201/B17461. url: https://www.taylorfrancis.com/books/mono/10.1201/b17461/software-metrics-norman-fenton-james-bieman.[52] Y Li et al. “A multi-objective and cost-aware optimization of requirements assignment for review”. In: 2017 IEEE Congress on Evolutionary Computation, CEC 2017 - Proceedings. School of Computer Science and Engineering, Beihang University, Beijing, China, 2017, pp. 89–96. doi: 10.1109/CEC.2017.7969300. url: https://ieeexplore.ieee.org/document/7969300[53] Jianhua Lin. “Divergence Measures Based on the Shannon Entropy”. In: IEEE Transactions on Information Theory 37.1 (1991), pp. 145–151. issn: 15579654. doi: 10.1109/18.61115.[54] Peter J. Liu et al. “Generating Wikipedia by Summarizing Long Sequences”. 
In: 6th International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings (Jan. 2018). doi: 10.48550/arxiv.1801.10198. url: https://arxiv.org/abs/1801.10198v1[55] Scott Lundberg and Su-In Lee. “A Unified Approach to Interpreting Model Predictions”. In: Advances in Neural Information Processing Systems 2017-Decem (May 2017), pp. 4766–4775. url: http://arxiv.org/abs/1705.07874[56] Thainá Mariani and Silvia Regina Vergilio. “A systematic review on search-based refactoring”. In: Information and Software Technology 83 (Mar. 2017), pp. 14–34. issn: 09505849. doi: 10.1016/j.infsof.2016.11.009[57] Stephen Marsland. Machine Learning: An Algorithmic Perspective, Second Edition - 2nd Edit.Chapman and Hall/CRC, 2014. url: https://www.routledge.com/Machine-Learning-An-Algorithmic-Perspective-Second-Edition/Marsland/p/book/9781466583283[58] R. Thomas McCoy et al. “How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN”. In: (Nov. 2021). url: https://arxiv.org/abs/2111.09509v1[59] Leland McInnes, John Healy, and James Melville. “UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction”. In: (Feb. 2018). doi: 10.48550/arxiv.1802.03426. url: https://arxiv.org/abs/1802.03426v3[60] Tomas Mikolov et al. “Efficient Estimation of Word Representations in Vector Space”. In: 1st International Conference on Learning Representations, ICLR 2013 - Workshop Track (Jan. 2013). doi: 10.48550/arxiv.1301.3781. url: https://arxiv.org/abs/1301.3781v3[61] Christoph Molnar. "Interpretable Machine Learning". 1st. Lulu (eBook), 2019. url: https://christophm.github.io/interpretable-ml-book/[62] Grégoire Montavon, Wojciech Samek, and Klaus-Robert Müller. “Methods for Interpreting and Understanding Deep Neural Networks”. In: Digital Signal Processing: A Review Journal 73 (June 2017), pp. 1–15. doi:10.1016/j.dsp.2017.10.011. url: https://arxiv.org/abs/1706.07979[63] David Nader and Jonatan Gómez. “A Computational Solution for the Software Refactoring Problem: From a Formalism Toward an Optimization Approach”. url: https://repositorio.unal.edu.co/handle/unal/62057[64] An Nguyen. “Language Model Evaluation in Open-ended Text Generation”. In:(Aug. 2021). doi: 10.48550/arxiv.2108.03578. url: https://arxiv.org/abs/2108.03578v1[65] Tung Thanh Nguyen et al. “A statistical semantic language model for source code”. In: 2013 9th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOF New York, New York, USA: ACM Press, 2013, pp. 532–542. isbn: 9781450322379. doi: 10.1145/2491411.2491458. url: https://dl.acm.org/doi/10.1145/2491411.2491458[66] Koichi Odajima et al. “Greedy rule generation from discrete data and its use in neural network rule extraction”. In: Neural Networks 21.7 (Sept. 2008), pp. 1020–1028. issn: 08936080. doi:10.1016/j. neunet.2008.01.003. url: https://pubmed.ncbi.nlm.nih.gov/18442894/[67] Alec Radford Openai et al. “Improving Language Understanding by Generative Pre-Training”. url: https://openai.com/blog/language-unsupervised/.[68] Ali Ouni et al. “MORE: A multi-objective refactoring recommendation approach to introducing design patterns and fixing code smells”. In: Journal of Software: Evolution and Process 29.5 (May 2017). issn: 20477481. doi:10.1002/smr.1843[69] Josh Patterson and Adam Gibson. "Deep Learning A Practitioner’s Approach". 2017. url: https://books.google.com.co/books?id=BdPrrQEACAAJ.[70] Tejaswini Pedapati et al. 
“Learning Global Transparent Models Consistent with Local Contrastive Explanations”. In: (Feb. 2020). url: http://arxiv.org/abs/2002.08247[71] Miltiadis Allamanis et al. “A Survey of Machine Learning for Big Code and Naturalness”. url: https://arxiv.org/abs/1709.06182[72] Krishna Pillutla et al. “MAUVE: Measuring the Gap Between Neural Text and Human Text using Divergence Frontiers”. In: (Feb. 2021). url: https://arxiv.org/abs/2102.01454v3[73] Gregory Plumb et al. “Regularizing Black-box Models for Improved Interpretability”. In: arXiv (Feb. 2019). url: http://arxiv.org/abs/1902.06787[74] Yewen Pu et al. “sk p: a neural program corrector for MOOCs”. In: SPLASH Companion 2016 - Companion Proceedings of the 2016 ACM SIGPLAN International Conference (July 2016), pp. 39–40. url: http://arxiv.org/abs/1607.02902[75] Alec Radford et al. “Language Models are Unsupervised Multitask Learners”. url: https://openai.com/blog/better-language-models/[76] Karthikeyan Natesan Ramamurthy et al. “Model Agnostic Multilevel Explanations”. In: arXiv (Mar. 2020). url: http://arxiv.org/abs/2003.06005[77] Shuo Ren et al. “CodeBLEU: a Method for Automatic Evaluation of Code Synthesis”. In: (Sept. 2020). url: https://arxiv.org/abs/2009.10297v2[78] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?” Explaining the predictions of any classifier”. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data M Vol. 13-17-August. Association for Computing Machinery, Aug. 2016, pp. 1135– 1144. isbn: 9781450342322. doi: 10 . 1145 / 2939672 . 2939778. url: http://dx.doi.org/10.1145/2939672.2939778[79] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. “Anchors: High-Precision Model-Agnostic Explanations”. In: undefined (2018). url: https://ojs.aaai.org/index.php/AAAI/article/view/11491[80] Peter J. Rousseeuw. “Silhouettes: A graphical aid to the interpretation and validation of cluster analysis”. In: Journal of Computational and Applied Mathematics 20.C (Nov. 1987), pp. 53–65. issn: 0377-0427. doi: 10.1016/0377-0427(87)901257[81] Rico Sennrich, Barry Haddow, and Alexandra Birch. “Neural Machine Translation of Rare Words with Subword Units”. In: 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016 - Long Papers 3 (Aug. 2015), pp. 1715–1725. url: http://arxiv.org/abs/1508.07909[82] Rudy Setiono and Huan Liu. "Understanding Neural Networks via Rule Extraction". Tech. rep. url: https://dl.acm.org/doi/10.5555/1625855.1625918[83] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. “Learning Important Features Through Propagating Activation Differences”. In: 34th International Conference on Machine Learning, ICML 2017 7 (Apr. 2017), pp. 4844–4866. url: http://arxiv.org/abs/1704.02685[84] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. “Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps”. In: 2nd International Conference on Learning Representations, ICLR 2014 - Workshop Track Proceed (Dec. 2013). url: http://arxiv.org/abs/1312.6034[85] Armando Solar-Lezama. “Program Synthesis by Sketching”. Ph.D. thesis. 2008.[86] Erik Strumbelj and Igor Kononenko. "An Efficient Explanation of Individual Classifications using Game Theory". Tech. rep. 2010, pp. 1–18. doi:10.5555/1756006.1756007. url: https://dl.acm.org/doi/10.5555/1756006.1756007[87] Chakkrit Tantithamthavorn, Ahmed E. Hassan, and Kenichi Matsumoto. 
“The Impact of Class Rebalancing Techniques on the Performance and Interpretation of Defect Prediction Models”. In: IEEE Transactions on Software Engineering 46.11 (Nov. 2018), pp. 1200–1219. issn: 19393520. doi: 10.1109/TSE.2018.2876537. url: https://ieeexplore.ieee.org/document/8494821/[88] Chakkrit Tantithamthavorn, Jirayus Jiarpakdee, and John Grundy. "Explainable AI for Software Engineering". Tech. rep. 2019. url: https://ieeexplore.ieee.org/document/9678580[89] Jake VanderPlas. Python Data Science Handbook. O’Reilly Media, Inc, 2017. url: https://jakevdp.github.io/PythonDataScienceHandbook/[90] Ashish Vaswani et al. “Attention is all you need”. In: Advances in Neural Information Processing Systems. Vol. 2017-Decem. Neural information processing systems foundation, June 2017, pp. 5999–6009. url: https://arxiv.org/abs/1706.03762v5[91] Richard J. Waldinger and Richard C. T Lee. “PROW: a step toward automatic program writing”. In: IJCAI’69: Proceedings of the 1st international joint conference on Artificial intelligence (1969). url: https://dl.acm.org/doi/10.5555/1624562.1624586[92] Gazzola, Micucci, and Mariani “Automatic Program Repair Techniques: A Survey ”. In:Jisuanji Xuebao/Chinese Journal of Computers 41.3 (2018), pp. 588–610. doi:10.11897/SP.J.1016.2018.00588. url: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8089448[93] Cody Watson et al. “A systematic literature review on the use of deep learning in Software Engineering Research”. In: arXiv (2020). issn: 23318422. url: https://arxiv.org/abs/2009.06520[94] Supatsara Wattanakriengkrai et al. "Predicting Defective Lines Using a Model-Agnostic Technique". Tech. rep. url: https://ieeexplore.ieee.org/document/9193975[95] Ethan Weinberger, Joseph Janizek, and Su-In Lee. “Learning Deep Attribution Priors Based On Prior Knowledge”. In: arXiv (Dec. 2019). url: http://arxiv.org/abs/1912.10065[96] Maksymilian Wojtas and Ke Chen. “Feature Importance Ranking for Deep Learning”. In: (Oct. 2020). url: http://arxiv.org/abs/2010.08973[97] Mike Wu et al. “Beyond Sparsity: Tree Regularization of Deep Models for Interpretability”. In: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (Nov. 2017), pp. 1670–1678. url: http://arxiv.org/abs/1711.06178[98] Chih-Kuan Yeh et al. “Representer Point Selection for Explaining Deep Neural Networks”. In: Advances in Neural Information Processing Systems 2018-Decem (Nov. 2018), pp. 9291–9301. url: http://arxiv.org/abs/1811.09720[99] Pengcheng Yin and Graham Neubig. “A Syntactic Neural Model for General-Purpose Code Generation”. In: ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings o 1 (Apr. 2017), pp. 440–450. url: http://arxiv.org/abs/1704.01696[100] Quanshi Zhang, Ying Nian Wu, and Song-Chun Zhu. “Interpretable Convolutional Neural Networks”. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Oct. 2017), pp. 8827–8836. url: http://arxiv.org/abs/1710.00935[101] Yu Zhang et al. “A Survey on Neural Network Interpretability”. In: arXiv (Dec.2020). 
url: http://arxiv.org/abs/2012.14261