Multi-view learning for hierarchical topic detection on corpus of documents
diagrams, color illustrations, tables
- Authors:
- Calero Espinosa, Juan Camilo
- Resource type:
- Publication date:
- 2021
- Institution:
- Universidad Nacional de Colombia
- Repository:
- Universidad Nacional de Colombia
- Language:
- eng
- OAI Identifier:
- oai:repositorio.unal.edu.co:unal/79567
- Keywords:
- 000 - Computer science, information and general works
Named entities
Topic detection
Multi-view clustering
Multi-view learning
Graph fusion
Automatic indexing
Information retrieval
Information processing
- Rights
- openAccess
- License
- Attribution 4.0 International (Reconocimiento 4.0 Internacional)
id | UNACIONAL2_c384b5f298185e5f2a5be1a6b53b108b
---|---
oai_identifier_str | oai:repositorio.unal.edu.co:unal/79567
network_acronym_str | UNACIONAL2
network_name_str | Universidad Nacional de Colombia
repository_id_str |
dc.title.eng.fl_str_mv | Multi-view learning for hierarchical topic detection on corpus of documents
dc.title.translated.spa.fl_str_mv | Aprendizaje multi-vista para la detección jerárquica de temas en corpus de documentos
title | Multi-view learning for hierarchical topic detection on corpus of documents
dc.creator.fl_str_mv | Calero Espinosa, Juan Camilo
dc.contributor.advisor.none.fl_str_mv | Niño Vasquez, Luis Fernando
dc.contributor.author.none.fl_str_mv | Calero Espinosa, Juan Camilo
dc.contributor.researchgroup.spa.fl_str_mv | LABORATORIO DE INVESTIGACIÓN EN SISTEMAS INTELIGENTES - LISI
dc.subject.ddc.spa.fl_str_mv | 000 - Ciencias de la computación, información y obras generales
dc.subject.proposal.eng.fl_str_mv | Named entities; Topic detection; Multi-view clustering; Multi-view learning; Graph fusion
dc.subject.proposal.spa.fl_str_mv | Entidades nombradas; Aprendizaje multi-vista; Agrupamiento multi-vista; Fusión de grafos
dc.subject.unesco.none.fl_str_mv | Indexación automática; Recuperación de información; Information processing; Automatic indexing
description | diagramas, ilustraciones a color, tablas
publishDate | 2021
dc.date.accessioned.none.fl_str_mv | 2021-05-26T16:54:28Z
dc.date.available.none.fl_str_mv | 2021-05-26T16:54:28Z
dc.date.issued.none.fl_str_mv | 2021
dc.type.spa.fl_str_mv | Trabajo de grado - Maestría
dc.type.driver.spa.fl_str_mv | info:eu-repo/semantics/masterThesis
dc.type.version.spa.fl_str_mv | info:eu-repo/semantics/acceptedVersion
dc.type.content.spa.fl_str_mv | Text
dc.type.redcol.spa.fl_str_mv | http://purl.org/redcol/resource_type/TM
status_str | acceptedVersion
dc.identifier.uri.none.fl_str_mv | https://repositorio.unal.edu.co/handle/unal/79567
dc.identifier.instname.spa.fl_str_mv | Universidad Nacional de Colombia
dc.identifier.reponame.spa.fl_str_mv | Repositorio Institucional Universidad Nacional de Colombia
dc.identifier.repourl.spa.fl_str_mv | https://repositorio.unal.edu.co/
dc.language.iso.spa.fl_str_mv | eng
language | eng
dc.rights.coar.fl_str_mv | http://purl.org/coar/access_right/c_abf2
dc.rights.license.spa.fl_str_mv | Reconocimiento 4.0 Internacional
dc.rights.uri.spa.fl_str_mv | http://creativecommons.org/licenses/by/4.0/
dc.rights.accessrights.spa.fl_str_mv | info:eu-repo/semantics/openAccess
dc.format.extent.spa.fl_str_mv | 1 recurso en línea (88 páginas)
dc.format.mimetype.spa.fl_str_mv | application/pdf
dc.publisher.spa.fl_str_mv | Universidad Nacional de Colombia
dc.publisher.program.spa.fl_str_mv | Bogotá - Ingeniería - Maestría en Ingeniería - Ingeniería de Sistemas y Computación
dc.publisher.department.spa.fl_str_mv | Departamento de Ingeniería de Sistemas e Industrial
dc.publisher.faculty.spa.fl_str_mv | Facultad de Ingeniería
dc.publisher.place.spa.fl_str_mv | Bogotá
dc.publisher.branch.spa.fl_str_mv | Universidad Nacional de Colombia - Sede Bogotá
institution | Universidad Nacional de Colombia
repository.name.fl_str_mv | Repositorio Institucional Universidad Nacional de Colombia
repository.mail.fl_str_mv | repositorio_nal@unal.edu.co
spelling | Abstract: Topic detection on a large corpus of documents requires a considerable amount of computational resources, and the number of topics increases the burden as well. However, even a large number of topics might not be as specific as desired, or the topic quality may simply start decreasing after a certain number. To overcome these obstacles, we propose a new methodology for hierarchical topic detection, which uses multi-view clustering to link different topic models extracted from document named entities and part-of-speech tags. Results on three different datasets show that the methodology decreases the memory cost of topic detection, improves topic quality and allows the detection of more topics. (Degree work: Maestría, Magíster en Ingeniería - Sistemas y Computación; research line: natural language processing.)