Modelo de aprendizaje profundo para cuantificar el daño causado por el Glaucoma en el nervio óptico

illustrations, photographs, graphs

Authors:
Beltrán Barrera, Lillian Daniela
Resource type:
Trabajo de grado - Maestría
Publication date:
2023
Institution:
Universidad Nacional de Colombia
Repository:
Universidad Nacional de Colombia
Language:
spa
OAI Identifier:
oai:repositorio.unal.edu.co:unal/83865
Online access:
https://repositorio.unal.edu.co/handle/unal/83865
https://repositorio.unal.edu.co/
Keywords:
000 - Ciencias de la computación, información y obras generales::006 - Métodos especiales de computación
Enfermedades del Nervio Óptico
Optic Nerve Diseases
Glaucoma
Escala DDLS
RDR
YOLO
Aprendizaje por transferencia
Redes neuronales convolucionales
Modelo de clasificación
Modelo de segmentación
Glaucoma
DDLS scale
RDR
YOLO
Transfer learning
Convolutional neural networks
Classification model
Segmentation model
Modelo de simulación
Simulation models
Rights
openAccess
License
Atribución-NoComercial 4.0 Internacional
id UNACIONAL2_d8435cef415382d1b76c8056a401fe49
oai_identifier_str oai:repositorio.unal.edu.co:unal/83865
network_acronym_str UNACIONAL2
network_name_str Universidad Nacional de Colombia
repository_id_str
dc.title.spa.fl_str_mv Modelo de aprendizaje profundo para cuantificar el daño causado por el Glaucoma en el nervio óptico
dc.title.translated.eng.fl_str_mv Deep learning model to quantify the damage caused by Glaucoma in the optic nerve
dc.creator.fl_str_mv Beltrán Barrera, Lillian Daniela
dc.contributor.advisor.none.fl_str_mv Perdomo Charry, Oscar Julian
Gonzalez Osorio, Fabio Augusto
dc.contributor.author.none.fl_str_mv Beltrán Barrera, Lillian Daniela
dc.contributor.researchgroup.spa.fl_str_mv Mindlab
dc.subject.ddc.spa.fl_str_mv 000 - Ciencias de la computación, información y obras generales::006 - Métodos especiales de computación
dc.subject.decs.spa.fl_str_mv Enfermedades del Nervio Óptico
dc.subject.decs.eng.fl_str_mv Optic Nerve Diseases
dc.subject.proposal.spa.fl_str_mv Glaucoma
Escala DDLS
RDR
YOLO
Aprendizaje por transferencia
Redes neuronales convolucionales
Modelo de clasificación
Modelo de segmentación
dc.subject.proposal.eng.fl_str_mv Glaucoma
DDLS scale
RDR
YOLO
Transfer learning
Convolutional neural networks
Classification model
Segmentation model
dc.subject.unesco.none.fl_str_mv Modelo de simulación
Simulation models
description ilustraciones, fotografías, gráficas
publishDate 2023
dc.date.accessioned.none.fl_str_mv 2023-05-25T15:38:35Z
dc.date.available.none.fl_str_mv 2023-05-25T15:38:35Z
dc.date.issued.none.fl_str_mv 2023-04
dc.type.spa.fl_str_mv Trabajo de grado - Maestría
dc.type.driver.spa.fl_str_mv info:eu-repo/semantics/masterThesis
dc.type.version.spa.fl_str_mv info:eu-repo/semantics/acceptedVersion
dc.type.content.spa.fl_str_mv Text
dc.type.redcol.spa.fl_str_mv http://purl.org/redcol/resource_type/TM
dc.identifier.uri.none.fl_str_mv https://repositorio.unal.edu.co/handle/unal/83865
dc.identifier.instname.spa.fl_str_mv Universidad Nacional de Colombia
dc.identifier.reponame.spa.fl_str_mv Repositorio Institucional Universidad Nacional de Colombia
dc.identifier.repourl.spa.fl_str_mv https://repositorio.unal.edu.co/
dc.language.iso.spa.fl_str_mv spa
dc.rights.coar.fl_str_mv http://purl.org/coar/access_right/c_abf2
dc.rights.license.spa.fl_str_mv Atribución-NoComercial 4.0 Internacional
dc.rights.uri.spa.fl_str_mv http://creativecommons.org/licenses/by-nc/4.0/
dc.rights.accessrights.spa.fl_str_mv info:eu-repo/semantics/openAccess
dc.format.extent.spa.fl_str_mv xiv, 59 páginas
dc.format.mimetype.spa.fl_str_mv application/pdf
dc.publisher.spa.fl_str_mv Universidad Nacional de Colombia
dc.publisher.program.spa.fl_str_mv Bogotá - Ingeniería - Maestría en Ingeniería - Ingeniería de Sistemas y Computación
dc.publisher.faculty.spa.fl_str_mv Facultad de Ingeniería
dc.publisher.place.spa.fl_str_mv Bogotá, Colombia
dc.publisher.branch.spa.fl_str_mv Universidad Nacional de Colombia - Sede Bogotá
bitstream.url.fl_str_mv https://repositorio.unal.edu.co/bitstream/unal/83865/1/license.txt
https://repositorio.unal.edu.co/bitstream/unal/83865/2/1026591991.2023.pdf
https://repositorio.unal.edu.co/bitstream/unal/83865/3/1026591991.2023.pdf.jpg
bitstream.checksum.fl_str_mv eb34b1cf90b7e1103fc9dfd26be24b4a
75bc82991e123daaf9074988a22d8606
4e4f8f783c27baf41caf049e6c404a87
bitstream.checksumAlgorithm.fl_str_mv MD5
MD5
MD5
repository.name.fl_str_mv Repositorio Institucional Universidad Nacional de Colombia
repository.mail.fl_str_mv repositorio_nal@unal.edu.co
_version_ 1814089949902798848
dc.description.abstract.spa.fl_str_mv El glaucoma es una de las enfermedades de mayor prevalencia y gravedad en el mundo. Se caracteriza por provocar una pérdida gradual de la visión periférica que, si no se trata a tiempo, puede ser irreversible y conducir a la pérdida total de la visión. Con el objetivo de facilitar la detección temprana de esta enfermedad, se han propuesto diversos modelos basados en aprendizaje profundo y redes neuronales convolucionales que permiten un diagnóstico automatizado. A pesar de su utilidad, estos modelos presentan algunas limitaciones, como la evaluación del ancho del borde neurorretiniano solamente de forma vertical y la asignación de una clasificación binaria para denotar la presencia o ausencia de la enfermedad, lo que dificulta la identificación de su estadio y del avance de la enfermedad en múltiples direcciones. Por tal motivo, este trabajo presenta un enfoque basado en aprendizaje profundo que toma como referencia la escala DDLS (Disc Damage Likelihood Scale) para detectar y conocer el avance del glaucoma en los pacientes. Para ello, se utilizó como insumo el conjunto de imágenes REFUGE (Retinal Fundus Glaucoma Challenge), identificando la región de interés (ROI, por sus siglas en inglés) mediante el algoritmo de detección de objetos YOLO (You Only Look Once). Después de esto, se midió el RDR (Rim-to-Disc Ratio) en cada grado en las imágenes segmentadas utilizando dos modelos previamente entrenados: uno para el disco y otro para la copa ocular. De esta manera, se logró asignar nuevas etiquetas a las imágenes con base en la escala DDLS. Luego, se entrenó un modelo base con las etiquetas originales, el cual se comparó con tres modelos entrenados mediante aprendizaje por transferencia con las etiquetas construidas. Estos modelos utilizaron diferentes técnicas para el procesamiento de las imágenes, incluyendo la conversión de coordenadas cartesianas a polares y el recorte de las imágenes en estéreo centradas en el nervio óptico a una dimensión de 224 × 224 píxeles para contar con mayor información de la imagen. Los mejores resultados fueron obtenidos por el modelo entrenado con las imágenes convertidas a coordenadas polares. (Texto tomado de la fuente)
dc.description.abstract.eng.fl_str_mv Glaucoma is one of the most prevalent and severe diseases in the world, characterized by a gradual loss of peripheral vision that, if not treated in time, can be irreversible and lead to total vision loss. In order to facilitate early detection of this disease, various models based on deep learning and convolutional neural networks have been proposed, which allow for automated diagnosis. Despite their usefulness, these models present some limitations, such as evaluating neuroretinal rim width only vertically and assigning a binary classification to denote the presence or absence of the disease, which makes it difficult to identify its stage and its progression in multiple directions. For this reason, this work presents a deep learning approach that uses the DDLS (Disc Damage Likelihood Scale) to detect and track the progression of glaucoma in patients. For this purpose, the REFUGE (Retinal Fundus Glaucoma Challenge) image set was used as input, identifying the region of interest (ROI) with the YOLO (You Only Look Once) object detection algorithm. After this, the RDR (Rim-to-Disc Ratio) was measured at each degree in the segmented images using two previously trained models: one for the disc and one for the optic cup. In this way, new labels were assigned to the images based on the DDLS scale. Then, a baseline model was trained with the original labels and compared against three models trained by transfer learning with the constructed labels. These models used different image-processing techniques, including the conversion from Cartesian to polar coordinates and the cropping of stereo images centered on the optic nerve to 224 × 224 pixels to retain more information from the image. The best results were obtained by the model trained with images converted to polar coordinates.
dc.description.degreelevel.spa.fl_str_mv Maestría
dc.description.degreename.spa.fl_str_mv Magíster en Ingeniería - Ingeniería de Sistemas y Computación
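The abstract describes converting optic-nerve crops from Cartesian to polar coordinates before training. A minimal sketch of that resampling in plain NumPy, not the thesis's actual code (the 224-row output size and nearest-neighbour sampling are assumptions here); rows of the output sweep radius from the centre outward and columns sweep the 360 degrees of angle:

```python
import numpy as np

def to_polar(img, out_h=224, out_w=224):
    """Resample a square fundus crop into polar coordinates.

    Output rows sweep radius (centre -> edge); columns sweep angle
    (0 -> 360 degrees). Nearest-neighbour sampling keeps the sketch
    dependency-free.
    """
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = min(cy, cx)
    radii = np.linspace(0.0, max_r, out_h)
    angles = np.linspace(0.0, 2 * np.pi, out_w, endpoint=False)
    rr = radii[:, None]                 # (out_h, 1)
    aa = angles[None, :]                # (1, out_w)
    ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, w - 1)
    return img[ys, xs]                  # fancy indexing -> (out_h, out_w)

# A centred disc maps to a horizontal band in polar space.
demo = np.zeros((101, 101), dtype=np.uint8)
yy, xx = np.ogrid[:101, :101]
demo[(yy - 50) ** 2 + (xx - 50) ** 2 <= 20 ** 2] = 1
polar = to_polar(demo)
```

OpenCV's `cv2.warpPolar` performs the same transform with interpolation; the appeal of the polar view for this task is that a circular rim becomes a horizontal band, so rim narrowing at any angle lines up along a single axis.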
IEEE, 2021, pp. 327–329.[62] N. Thakur and M. Juneja, “Classification of glaucoma using hybrid features with machi- ne learning approaches,” Biomedical Signal Processing and Control, vol. 62, p. 102137, 2020.[63] L. K. Singh, M. Khanna, S. Thawkar, and R. Singh, “Nature-inspired computing and machine learning based classification approach for glaucoma in retinal fundus images,” Multimedia Tools and Applications, pp. 1–49, 2023.[64] J. I. Orlando, H. Fu, J. B. Breda, K. van Keer, D. R. Bathula, A. Diaz-Pinto, R. Fang, P.-A. Heng, J. Kim, J. Lee et al., “Refuge challenge: A unified framework for evaluating automated methods for glaucoma assessment from fundus photographs,” Medical image analysis, vol. 59, p. 101570, 2020.[65] F. J. F. Batista, T. Diaz-Aleman, J. Sigut, S. Alayon, R. Arnay, and D. Angel-Pereira, “Rim-one dl: A unified retinal image database for assessing glaucoma using deep lear- ning,” Image Analysis & Stereology, vol. 39, no. 3, pp. 161–167, 2020.[66] O. Kovalyk, J. Morales-S´anchez, R. Verdu´-Monedero, I. Sell´es-Navarro, A. Palaz´on- Cabanes, and J.-L. Sancho-G´omez, “Papila: Dataset with fundus images and clinical data of both eyes of the same patient for glaucoma assessment,” Scientific Data, vol. 9, no. 1, pp. 1–12, 2022.[67] G. Bradski and A. Kaehler, “Opencv,” Dr. Dobb’s journal of software tools, vol. 3, p. 120, 2000.[68] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention. Springer, 2015, pp. 234–241.[69] N. Siddique, S. Paheding, C. P. Elkin, and V. Devabhaktuni, “U-net and its variants for medical image segmentation: A review of theory and applications,” Ieee Access, vol. 9, pp. 82 031–82 057, 2021.[70] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. 
Alemi, “Inception-v4, inception-resnet and the impact of residual connections on learning,” in Thirty-first AAAI conference on artificial intelligence, 2017.[71] H. Zhang, M. Tian, G. Shao, J. Cheng, and J. Liu, “Target detection of forward-looking sonar image based on improved yolov5,” IEEE Access, vol. 10, pp. 18 023–18 034, 2022.[1] N. Pardiñas Barón, F. Fernandez Fernández, F. Fondevila Camps, M. Giner Muñoz, and M. Ara Báguena, “Retinopatía de gran altura,” Archivos de la Sociedad Española de Oftalmología, vol. 87, no. 10, pp. 337–339, 2012.[2] M. Pahlitzsch, N. Torun, C. Erb, J. Bruenner, A. K. B. Maier, J. Gonnermann, E. Ber- telmann, and M. K. Klamann, “Significance of the disc damage likelihood scale objecti- vely measured by a non-mydriatic fundus camera in preperimetric glaucoma,” Clinical Ophthalmology (Auckland, NZ), vol. 9, p. 2147, 2015.[3] I. Katsamenis, E. E. Karolou, A. Davradou, E. Protopapadakis, A. Doulamis, N. Dou- lamis, and D. Kalogeras, “Tracon: A novel dataset for real-time traffic cones detection using deep learning,” arXiv preprint arXiv:2205.11830, 2022.[4] J. D. Henderer, “Disc damage likelihood scale,” British Journal of Ophthalmology, vol. 90, no. 4, pp. 395–396, 2006.[5] D. J. Smits, T. Elze, H. Wang, and L. R. Pasquale, “Machine Learning in the Detection of the Glaucomatous Disc and Visual Field,” Seminars in Ophthalmology, vol. 34, no. 4, pp. 232–242, 2019. [Online]. Available: https://doi.org/10.1080/08820538.2019.1620801[6] F. Li, L. Yan, Y. Wang, J. Shi, H. Chen, X. Zhang, M. Jiang, Z. Wu, and K. Zhou, “Deep learning-based automated detection of glaucomatous optic neuropathy on color fundus photographs,” Graefe’s Archive for Clinical and Experimental Ophthalmology, vol. 258, no. 4, pp. 851–867, 2020.[7] S. Borwankar, R. Sen, and B. Kakani, “Improved Glaucoma Diagnosis Using Deep Learning,” Proceedings of CONECCT 2020 - 6th IEEE International Conference on Electronics, Computing and Communication Technologies, pp. 
2–5, 2020.[8] M. S. Haleem, L. Han, J. van Hemert, and B. Li, “Automatic extraction of retinal features from colour retinal images for glaucoma diagnosis: A review,” Computerized Medical Imaging and Graphics, vol. 37, no. 7-8, pp. 581–596, 2013. [Online]. Available: http://dx.doi.org/10.1016/j.compmedimag.2013.09.005[9] M. Kim, J. C. Han, S. H. Hyun, O. Janssens, S. Van Hoecke, C. Kee, and W. De Neve, “Medinoid: Computer-aided diagnosis and localization of glaucoma using deep learning,” Applied Sciences (Switzerland), vol. 9, no. 15, 2019.[10] S. K. Devalla, Z. Liang, T. H. Pham, C. Boote, N. G. Strouthidis, A. H. Thiery, and M. J. Girard, “Glaucoma management in the era of artificial intelligence,” British Journal of Ophthalmology, vol. 104, no. 3, pp. 301–311, 2020.[11] S. Sreng, N. Maneerat, K. Hamamoto, and K. Y. Win, “Deep learning for optic disc seg- mentation and glaucoma diagnosis on retinal images,” Applied Sciences (Switzerland), vol. 10, no. 14, 2020.[12] H. Fu, F. Li, Y. Xu, J. Liao, J. Xiong, J. Shen, J. Liu, X. Zhang, C. Yang, F. Lin, H. Luo, H. Li, H. Che, N. Li, and Y. Fan, “A retrospective comparison of deep learning to manual annotations for optic disc and optic cup segmentation in fundus photos,” medRxiv, pp. 1–11, 2020.[13] S. Malik, N. Kanwal, M. N. Asghar, M. A. A. Sadiq, I. Karamat, and M. Fleury, “Data driven approach for eye disease classification with machine learning,” Applied Sciences (Switzerland), vol. 9, no. 14, 2019.[14] A. C. Thompson, A. A. Jammal, and F. A. Medeiros, “A review of deep learning for screening, diagnosis, and detection of glaucoma progression,” Translational Vision Science and Technology, vol. 9, no. 2, pp. 1–19, 2020.[15] H. Liu, L. Li, I. M. Wormstone, C. Qiao, C. Zhang, P. Liu, S. Li, H. Wang, D. Mou, R. Pang, D. Yang, L. M. Zangwill, S. Moghimi, H. Hou, C. Bowd, L. Jiang, Y. Chen, M. Hu, Y. Xu, H. Kang, X. Ji, R. Chang, C. Tham, C. Cheung, D. S. W. Ting, T. Y. Wong, Z. Wang, R. N. Weinreb, M. Xu, and N. 
Wang, “Development and Validation of a Deep Learning System to Detect Glaucomatous Optic Neuropathy Using Fundus Photographs,” JAMA Ophthalmology, vol. 137, no. 12, pp. 1353–1360, 2019.[16] Z. Li, S. Keel, C. Liu, and M. He, “Can artificial intelligence make screening faster, more accurate, and more accessible?” Asia-Pacific Journal of Ophthalmology, vol. 7, no. 6, pp. 436–441, 2018.[17] R. Kapoor, S. P. Walters, and L. A. Al-Aswad, “The current state of artificial intelligence in ophthalmology,” Survey of Ophthalmology, vol. 64, no. 2, pp. 233–240, 2019. [Online]. Available: https://doi.org/10.1016/j.survophthal.2018.09.002[18] I. J. MacCormick, B. M. Williams, Y. Zheng, K. Li, B. Al-Bander, S. Czanner, R. Cheeseman, C. E. Willoughby, E. N. Brown, G. L. Spaeth, and G. Czanner, “Accurate, fast, data efficient and interpretable glaucoma diagnosis with automated spatial analysis of the whole cup to disc profile,” PLoS ONE, vol. 14, no. 1, pp. 1–20, 2019. [Online]. Available: http://dx.doi.org/10.1371/journal.pone.0209409[19] Z. Tan, J. Scheetz, and M. He, “Artificial intelligence in ophthalmology: Accuracy, cha- llenges, and clinical application,” Asia-Pacific Journal of Ophthalmology, vol. 8, no. 3, pp. 197–199, 2019.[20] A. R. Ran, C. C. Tham, P. P. Chan, C. Y. Cheng, Y. C. Tham, T. H. Rim, and C. Y. Cheung, “Deep learning in glaucoma with optical coherence tomography: a review,” Eye (Basingstoke), vol. 35, no. 1, pp. 188–201, 2021. [Online]. Available: http://dx.doi.org/10.1038/s41433-020-01191-5[21] F. Abdullah, R. Imtiaz, H. A. Madni, H. A. Khan, T. M. Khan, M. A. Khan, and S. S. Naqvi, “A Review on Glaucoma Disease Detection using Computerized Techniques,” IEEE Access, pp. 37 311–37 333, 2021.[22] M. Alghamdi and M. Abdel-Mottaleb, “A Comparative Study of Deep Learning Models for Diagnosing Glaucoma From Fundus Images,” IEEE Access, vol. 9, pp. 23 894–23 906, 2021.[23] A. M. Stefan, E. A. Paraschiv, S. Ovreiu, and E. 
Ovreiu, “A review of glaucoma detec- tion from digital fundus images using machine learning techniques,” 2020 8th E-Health and Bioengineering Conference, EHB 2020, pp. 20–23, 2020.[24] D. Mirzania, A. C. Thompson, and K. W. Muir, “Applications of deep learning in detection of glaucoma: A systematic review,” European Journal of Ophthalmology, 2020.InvestigadoresLICENSElicense.txtlicense.txttext/plain; charset=utf-85879https://repositorio.unal.edu.co/bitstream/unal/83865/1/license.txteb34b1cf90b7e1103fc9dfd26be24b4aMD51ORIGINAL1026591991.2023.pdf1026591991.2023.pdfTesis de Maestría en Ingeniería de Sistemas y Computaciónapplication/pdf29490025https://repositorio.unal.edu.co/bitstream/unal/83865/2/1026591991.2023.pdf75bc82991e123daaf9074988a22d8606MD52THUMBNAIL1026591991.2023.pdf.jpg1026591991.2023.pdf.jpgGenerated Thumbnailimage/jpeg4532https://repositorio.unal.edu.co/bitstream/unal/83865/3/1026591991.2023.pdf.jpg4e4f8f783c27baf41caf049e6c404a87MD53unal/83865oai:repositorio.unal.edu.co:unal/838652024-08-06 23:10:17.885Repositorio Institucional Universidad Nacional de 