ISeeU2: Visually interpretable mortality prediction inside the ICU using deep learning and free-text medical notes
Accurate mortality prediction allows Intensive Care Units (ICUs) to adequately benchmark clinical practice and identify patients with unexpected outcomes. Traditionally, simple statistical models have been used to assess patient death risk, often with sub-optimal performance. On the other hand...
- Authors:
- Caicedo-Torres, William; Gutierrez, Jairo
- Resource type:
- Article
- Publication date:
- 2022
- Institution:
- Universidad Tecnológica de Bolívar
- Repository:
- Repositorio Institucional UTB
- Language:
- eng
- OAI Identifier:
- oai:repositorio.utb.edu.co:20.500.12585/12197
- Online access:
- https://hdl.handle.net/20.500.12585/12197
- Keywords:
- Imbalanced Data; Cost-Sensitive Learning; Data Classification; LEMB
- Rights:
- openAccess
- License:
- http://creativecommons.org/licenses/by-nc-nd/4.0/
id: UTB2_904b6e6b4769de958a1e0b91ec25e824
oai_identifier_str: oai:repositorio.utb.edu.co:20.500.12585/12197
network_acronym_str: UTB2
network_name_str: Repositorio Institucional UTB
repository_id_str:
dc.title.spa.fl_str_mv / title / title_short / title_full / title_fullStr / title_full_unstemmed / title_sort: ISeeU2: Visually interpretable mortality prediction inside the ICU using deep learning and free-text medical notes
spellingShingle: ISeeU2: Visually interpretable mortality prediction inside the ICU using deep learning and free-text medical notes; Imbalanced Data; Cost-Sensitive Learning; Data Classification; LEMB
dc.creator.fl_str_mv / dc.contributor.author.none.fl_str_mv: Caicedo-Torres, William; Gutierrez, Jairo
dc.subject.keywords.spa.fl_str_mv: Imbalanced Data; Cost-Sensitive Learning; Data Classification
topic: Imbalanced Data; Cost-Sensitive Learning; Data Classification; LEMB
dc.subject.armarc.none.fl_str_mv: LEMB
description: Accurate mortality prediction allows Intensive Care Units (ICUs) to adequately benchmark clinical practice and identify patients with unexpected outcomes. Traditionally, simple statistical models have been used to assess patient death risk, often with sub-optimal performance. On the other hand, Deep Learning holds promise to positively impact clinical practice by leveraging medical data to assist diagnosis and prediction, including mortality prediction. However, because the question of whether powerful Deep Learning models attend to correlations backed by sound medical knowledge when generating predictions remains open, additional interpretability tools are needed to foster trust and encourage the use of AI by clinicians. In this work we present an interpretable Deep Learning model trained on MIMIC-III to predict mortality inside the ICU using raw nursing notes, together with visual explanations of word importance based on the Shapley value. Our model reaches a ROC AUC of 0.8629 (±0.0058), outperforming the traditional SAPS-II score and an LSTM recurrent neural network baseline, while providing enhanced interpretability compared with similar Deep Learning approaches. Supporting code can be found at https://github.com/williamcaicedo/ISeeU2. © 2022 Elsevier Ltd (An illustrative code sketch of the pipeline described in this abstract appears after the record below.)
publishDate: 2022
dc.date.issued.none.fl_str_mv: 2022
dc.date.accessioned.none.fl_str_mv: 2023-07-19T21:19:25Z
dc.date.available.none.fl_str_mv: 2023-07-19T21:19:25Z
dc.date.submitted.none.fl_str_mv: 2023
dc.type.coarversion.fl_str_mv: http://purl.org/coar/version/c_b1a7d7d4d402bcce
dc.type.coar.fl_str_mv: http://purl.org/coar/resource_type/c_2df8fbb1
dc.type.driver.spa.fl_str_mv: info:eu-repo/semantics/article
dc.type.hasversion.spa.fl_str_mv: info:eu-repo/semantics/draft
dc.type.spa.spa.fl_str_mv: http://purl.org/coar/resource_type/c_6501
status_str: draft
dc.identifier.uri.none.fl_str_mv / url: https://hdl.handle.net/20.500.12585/12197
dc.identifier.doi.none.fl_str_mv: 10.1016/j.eswa.2022.117190
dc.identifier.instname.spa.fl_str_mv: Universidad Tecnológica de Bolívar
dc.identifier.reponame.spa.fl_str_mv: Repositorio Universidad Tecnológica de Bolívar
identifier_str_mv: 10.1016/j.eswa.2022.117190; Universidad Tecnológica de Bolívar; Repositorio Universidad Tecnológica de Bolívar
dc.language.iso.spa.fl_str_mv / language: eng
dc.rights.coar.fl_str_mv: http://purl.org/coar/access_right/c_abf2
dc.rights.uri.*.fl_str_mv: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.rights.accessrights.spa.fl_str_mv: info:eu-repo/semantics/openAccess
dc.rights.cc.*.fl_str_mv: Attribution-NonCommercial-NoDerivatives 4.0 International
rights_invalid_str_mv: http://creativecommons.org/licenses/by-nc-nd/4.0/; Attribution-NonCommercial-NoDerivatives 4.0 International; http://purl.org/coar/access_right/c_abf2
eu_rights_str_mv: openAccess
dc.format.extent.none.fl_str_mv: 32 pages
dc.format.mimetype.spa.fl_str_mv: application/pdf
dc.publisher.place.spa.fl_str_mv: Cartagena de Indias
dc.source.spa.fl_str_mv: Expert Systems with Applications
institution: Universidad Tecnológica de Bolívar
bitstream.url.fl_str_mv:
  https://repositorio.utb.edu.co/bitstream/20.500.12585/12197/1/2005.09284.pdf
  https://repositorio.utb.edu.co/bitstream/20.500.12585/12197/2/license_rdf
  https://repositorio.utb.edu.co/bitstream/20.500.12585/12197/3/license.txt
  https://repositorio.utb.edu.co/bitstream/20.500.12585/12197/4/2005.09284.pdf.txt
  https://repositorio.utb.edu.co/bitstream/20.500.12585/12197/5/2005.09284.pdf.jpg
bitstream.checksum.fl_str_mv (checksumAlgorithm: MD5 for all):
  848bd2300a24b95eed83b914c9bb8e24
  4460e5956bc1d1639be9ae6146a50347
  e20ad307a1c5f3f25af9304a7a7c86b6
  45dc17210c4cb0901e99113703983e59
  e1f5a1c51100daaba58d701c556455cc
repository.name.fl_str_mv: Repositorio Institucional UTB
repository.mail.fl_str_mv: repositorioutb@utb.edu.co
_version_: 1814021790127620096
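The abstract above describes a deep model over raw MIMIC-III nursing notes whose predictions are explained at the word level with Shapley values, evaluated against SAPS-II and an LSTM baseline, and the record's keywords point to imbalanced data and cost-sensitive learning. The sketch below is only a minimal illustration of those ingredients under stated assumptions, not the authors' ISeeU2 implementation (their code is at https://github.com/williamcaicedo/ISeeU2): the 1D-CNN architecture, vocabulary size, note length, toy data, class-weighting scheme, and the use of shap.KernelExplainer are all choices made here for illustration.

```python
# Minimal illustrative sketch (not the published ISeeU2 code): a small 1D-CNN
# over integer-encoded nursing notes, cost-sensitive training via class
# weights, and Shapley-value attributions with shap.KernelExplainer.
import numpy as np
import shap
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 20_000   # assumed vocabulary size
MAX_LEN = 500         # assumed (padded/truncated) note length in tokens

def build_model() -> tf.keras.Model:
    """Embedding -> 1D convolution -> global max pooling -> sigmoid mortality risk."""
    notes = layers.Input(shape=(MAX_LEN,), dtype="int32")
    x = layers.Embedding(VOCAB_SIZE, 64)(notes)
    x = layers.Conv1D(128, kernel_size=5, activation="relu")(x)
    x = layers.GlobalMaxPooling1D()(x)
    x = layers.Dense(64, activation="relu")(x)
    risk = layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(notes, risk)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="roc_auc")])
    return model

# Toy stand-in data; real inputs would be tokenized MIMIC-III nursing notes.
X = np.random.randint(1, VOCAB_SIZE, size=(256, MAX_LEN))
y = np.random.binomial(1, 0.1, size=256)  # in-ICU death is the rare class

# Cost-sensitive learning: up-weight the minority (deceased) class.
neg, pos = np.bincount(y, minlength=2)
class_weight = {0: 1.0, 1: float(neg) / max(pos, 1)}

model = build_model()
model.fit(X, y, epochs=1, batch_size=32, class_weight=class_weight, verbose=0)

# Shapley-value attributions per token position for one note, using the
# model-agnostic KernelExplainer with a small background sample (slow but simple).
predict_fn = lambda data: model.predict(data, verbose=0)
explainer = shap.KernelExplainer(predict_fn, X[:20])
shap_values = explainer.shap_values(X[:1], nsamples=100)
print("Attribution array shape:", np.asarray(shap_values).shape)
```

With per-position attributions in hand, each value can be mapped back to the corresponding word of the note and rendered as a heat map, which is the kind of visual word-importance explanation the abstract refers to.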
References:
Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G., ..., Zheng, X. (2015). TensorFlow: Large-scale machine learning on heterogeneous distributed systems. http://download.tensorflow.org/paper/whitepaper2015.pdf
Blagus, R., & Lusa, L. (2013). SMOTE for high-dimensional class-imbalanced data. BMC Bioinformatics, 14, 106. doi: 10.1186/1471-2105-14-106
Caicedo-Torres, W., & Gutierrez, J. (2019). ISeeU: Visually interpretable deep learning for mortality prediction inside the ICU. Journal of Biomedical Informatics, 98, 103269. doi: 10.1016/j.jbi.2019.103269
Chawla, N. V., Bowyer, K. W., Hall, L. O., & Kegelmeyer, W. P. (2002). SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16, 321-357. doi: 10.1613/jair.953
Che, Z., Purushotham, S., Cho, K., Sontag, D., & Liu, Y. (2016). Recurrent neural networks for multivariate time series with missing values. CoRR, abs/1606.0.
Chen, B., Xia, S., Chen, Z., Wang, B., & Wang, G. (2021). RSMOTE: A self-adaptive robust SMOTE for imbalanced problems with label noise. Information Sciences, 553, 397-428. doi: 10.1016/j.ins.2020.10.013
Cooper, G. F., Aliferis, C. F., Ambrosino, R., Aronis, J., Buchanan, B. G., Caruana, R., Fine, M. J., ..., Spirtes, P. (1997). An evaluation of machine-learning methods for predicting pneumonia mortality. Artificial Intelligence in Medicine, 9(2), 107-138. doi: 10.1016/S0933-3657(96)00367-3
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL HLT 2019 - Proceedings of the Conference, 1, 4171-4186.
Emanuel, E. J., Persad, G., Upshur, R., Thome, B., Parker, M., Glickman, A., Zhang, C., ..., Phillips, J. P. (2020). Fair allocation of scarce medical resources in the time of Covid-19. New England Journal of Medicine, 382(21), 2049-2055. doi: 10.1056/NEJMsb2005114
Gall, J.-R., Lemeshow, S., & Saulnier, F. (1993). A new Simplified Acute Physiology Score (SAPS II) based on a European/North American multicenter study. JAMA, 270(24), 2957-2963. doi: 10.1001/jama.1993.03510240069035
Grasselli, G., Pesenti, A., & Cecconi, M. (2020). Critical care utilization for the COVID-19 outbreak in Lombardy, Italy: Early experience and forecast during an emergency response. JAMA, 323(16), 1545-1546. doi: 10.1001/jama.2020.4031
Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780. doi: 10.1162/neco.1997.9.8.1735
Johnson, A. E. W., Pollard, T. J., Shen, L., Lehman, L.-W. H., Feng, M., Ghassemi, M., Moody, B., ..., Mark, R. G. (2016). MIMIC-III, a freely accessible critical care database. Scientific Data, 3, 160035. doi: 10.1038/sdata.2016.35
Johnson, A. E. W., Stone, D. J., Celi, L. A., & Pollard, T. J. (2018). The MIMIC Code Repository: Enabling reproducibility in critical care research. Journal of the American Medical Informatics Association, 25(1), 32-39. doi: 10.1093/jamia/ocx084
Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C. H., & Kang, J. (2020). BioBERT: A pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4), 1234-1240. doi: 10.1093/bioinformatics/btz682
Lipton, Z. C., Kale, D., & Wetzel, R. (2016). Directly modeling missing data in sequences with RNNs: Improved classification of clinical time series. In Proceedings of the 1st Machine Learning for Healthcare Conference, PMLR 56, 253-270. http://proceedings.mlr.press/v56/Lipton16.html
Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 2017-December, 4766-4775.
Naseriparsa, M., Al-Shammari, A., Sheng, M., Zhang, Y., & Zhou, R. (2020). RSMOTE: Improving classification performance over imbalanced medical datasets. Health Information Science and Systems, 8(1), 22. doi: 10.1007/s13755-020-00112-w
Purushotham, S., Meng, C., Che, Z., & Liu, Y. (2018). Benchmarking deep learning models on large healthcare datasets. Journal of Biomedical Informatics, 83, 112-134. doi: 10.1016/j.jbi.2018.04.007
Rapsang, A. G., & Shyam, D. C. (2014). Scoring systems in the intensive care unit: A compendium. Indian Journal of Critical Care Medicine, 18(4), 220-228. doi: 10.4103/0972-5229.130573
Shen, D., Wu, G., & Suk, H.-I. (2017). Deep learning in medical image analysis. Annual Review of Biomedical Engineering, 19, 221-248. doi: 10.1146/annurev-bioeng-071516-044442
Shickel, B., Tighe, P. J., Bihorac, A., & Rashidi, P. (2018). Deep EHR: A survey of recent advances in deep learning techniques for electronic health record (EHR) analysis. IEEE Journal of Biomedical and Health Informatics, 22(5), 1589-1604. doi: 10.1109/JBHI.2017.2767063
Si, Y., & Roberts, K. (2019). Deep patient representation of clinical notes via multi-task learning for mortality prediction. In AMIA Joint Summits on Translational Science Proceedings.
Sushil, M., Šuster, S., Luyckx, K., & Daelemans, W. (2018). Patient representation learning and interpretable evaluation using clinical notes. Journal of Biomedical Informatics, 84, 103-113. doi: 10.1016/j.jbi.2018.06.016