Segmentation of vegetation encroachment on electrical transmission lines using deep learning on drone images

This thesis addresses the problem of power outages on electrical transmission lines caused by vegetation encroachment. It proposes an approach based on drone imagery and deep learning techniques to anticipate and detect vegetation encroaching on these lines. The overall objective is to develop a workflow for segmenting vegetation-invaded areas through the creation of a public dataset of drone images, data preparation and fusion, and the selection of a deep learning architecture for detection. The proposed method is presented as a more efficient and reliable alternative to traditional manual field inspection. The approach aims to provide an effective tool for early detection of vegetation encroachment, thereby contributing to the quality and reliability of the electrical power supply and reducing the costs associated with outages caused by this problem. (Taken from the source)
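The workflow the abstract outlines — tile a drone orthomosaic into fixed-size patches, run a segmentation model on each patch, and obtain per-pixel vegetation masks — can be sketched roughly as follows. Everything here is illustrative: `tile_orthomosaic` and `segment_vegetation` are hypothetical names, zero-padding the edges is just one common convention, and the excess-green threshold merely stands in for the deep network the thesis actually trains.

```python
import numpy as np

def tile_orthomosaic(ortho: np.ndarray, tile: int) -> list:
    """Split an H x W x 3 orthomosaic into non-overlapping tile x tile patches.

    Edge regions that do not fill a whole tile are zero-padded so every
    patch has the same shape (an assumption for this sketch; the thesis'
    dataset pipeline may handle borders differently).
    """
    h, w, _ = ortho.shape
    pad_h = (tile - h % tile) % tile
    pad_w = (tile - w % tile) % tile
    padded = np.pad(ortho, ((0, pad_h), (0, pad_w), (0, 0)))
    patches = []
    for y in range(0, padded.shape[0], tile):
        for x in range(0, padded.shape[1], tile):
            patches.append(padded[y:y + tile, x:x + tile])
    return patches

def segment_vegetation(patch: np.ndarray) -> np.ndarray:
    """Placeholder 'model': excess-green index threshold.

    Stands in for the deep segmentation network; returns a boolean
    per-pixel mask of likely-vegetation pixels for one patch.
    """
    r = patch[..., 0].astype(float)
    g = patch[..., 1].astype(float)
    b = patch[..., 2].astype(float)
    exg = 2 * g - r - b   # excess-green vegetation index
    return exg > 20       # threshold chosen arbitrarily for this sketch
```

A usage sketch: `masks = [segment_vegetation(p) for p in tile_orthomosaic(ortho, 512)]`, after which the per-patch masks would be stitched back into a georeferenced vegetation layer.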
- Author: Cano Solis, Mateo
- Resource type: Trabajo de grado - Maestría (master's thesis)
- Publication date: 2024
- Institution: Universidad Nacional de Colombia
- Repository: Universidad Nacional de Colombia
- Language: spa
- OAI identifier: oai:repositorio.unal.edu.co:unal/85423
- Keywords:
- 000 - Computer science, information and general works::005 - Computer programming, programs and data
- 000 - Computer science, information and general works::006 - Special computer methods
- Artificial intelligence
- Digital image processing
- Power lines
- GeoAI
- UAV
- Vegetation encroachment
- Deep learning
- Semantic segmentation
- Machine learning
- Drones
- Segmentation
- Rights:
- openAccess
- License:
- Attribution-NonCommercial-NoDerivatives 4.0 International
| Field | Value |
|---|---|
| id | UNACIONAL2_e48cda840854e651cec056327ee585f9 |
| oai_identifier_str | oai:repositorio.unal.edu.co:unal/85423 |
| network_acronym_str | UNACIONAL2 |
| network_name_str | Universidad Nacional de Colombia |
| repository_id_str | |
| dc.title.spa.fl_str_mv | Segmentación de la invasión por vegetación a líneas de transmisión eléctrica usando aprendizaje profundo en imágenes de drone |
| dc.title.translated.eng.fl_str_mv | Segmentation of vegetation encroachment on electrical transmission lines using deep learning on drone images |
| title | Segmentación de la invasión por vegetación a líneas de transmisión eléctrica usando aprendizaje profundo en imágenes de drone |
| dc.creator.fl_str_mv | Cano Solis, Mateo |
| dc.contributor.advisor.none.fl_str_mv | Ballesteros Parra, John Robert; Branch Bedoya, John Willian |
| dc.contributor.author.none.fl_str_mv | Cano Solis, Mateo |
| dc.contributor.researchgroup.spa.fl_str_mv | GIDIA: Grupo de Investigación y Desarrollo en Inteligencia Artificial |
| dc.contributor.orcid.spa.fl_str_mv | https://orcid.org/0000-0001-9988-4624 |
| dc.contributor.cvlac.spa.fl_str_mv | https://scienti.minciencias.gov.co/cvlac/visualizador/generarCurriculoCv.do?cod_rh=0001689779 |
| dc.contributor.scopus.spa.fl_str_mv | 58551783500 |
| dc.contributor.researchgate.spa.fl_str_mv | https://www.researchgate.net/profile/Mateo-Cano-Solis |
| dc.contributor.googlescholar.spa.fl_str_mv | https://scholar.google.com/citations?user=OkGRZ30AAAAJ&hl=es |
| dc.subject.ddc.spa.fl_str_mv | 000 - Ciencias de la computación, información y obras generales::005 - Programación, programas, datos de computación; 000 - Ciencias de la computación, información y obras generales::006 - Métodos especiales de computación |
| dc.subject.lemb.none.fl_str_mv | Inteligencia artificial; Procesamiento digital de imágenes; Líneas eléctricas |
| dc.subject.proposal.eng.fl_str_mv | GeoAI; UAV; Vegetation encroachment; Power lines; Deep learning; Semantic segmentation; Artificial intelligence; Machine learning |
| dc.subject.proposal.spa.fl_str_mv | Drones; Invasión por vegetación; Líneas eléctricas; Aprendizaje profundo; Segmentación; Inteligencia artificial; Aprendizaje automático |
| dc.subject.wikidata.none.fl_str_mv | Aprendizaje profundo |
| description | El trabajo de tesis se enfoca en abordar el problema de los cortes de energía en líneas de transmisión eléctrica debido a la invasión de vegetación. Se propone un enfoque basado en el uso de imágenes de drones y técnicas de aprendizaje profundo para anticipar y detectar la presencia de vegetación en estas líneas. El objetivo general es desarrollar un flujo de trabajo que permita segmentar áreas invadidas por vegetación, mediante la creación de un conjunto de datos público de imágenes de drones, la preparación y fusión de datos, y la selección de una arquitectura de aprendizaje profundo para la detección. El método propuesto se presenta como una alternativa más eficiente y confiable en comparación con los métodos tradicionales de revisión manual en campo. El enfoque busca proporcionar una herramienta efectiva para la detección temprana de invasión de vegetación, contribuyendo así a mejorar la calidad y confiabilidad del suministro eléctrico y reduciendo los costos asociados a los cortes de energía generados por este problema. (Tomado de la fuente) |
| publishDate | 2024 |
| dc.date.accessioned.none.fl_str_mv | 2024-01-24T19:52:16Z |
| dc.date.available.none.fl_str_mv | 2024-01-24T19:52:16Z |
| dc.date.issued.none.fl_str_mv | 2024-01-14 |
| dc.type.spa.fl_str_mv | Trabajo de grado - Maestría |
| dc.type.driver.spa.fl_str_mv | info:eu-repo/semantics/masterThesis |
| dc.type.version.spa.fl_str_mv | info:eu-repo/semantics/acceptedVersion |
| dc.type.content.spa.fl_str_mv | Text |
| dc.type.redcol.spa.fl_str_mv | http://purl.org/redcol/resource_type/TM |
| status_str | acceptedVersion |
| dc.identifier.uri.none.fl_str_mv | https://repositorio.unal.edu.co/handle/unal/85423 |
| dc.identifier.instname.spa.fl_str_mv | Universidad Nacional de Colombia |
| dc.identifier.reponame.spa.fl_str_mv | Repositorio Institucional Universidad Nacional de Colombia |
| dc.identifier.repourl.spa.fl_str_mv | https://repositorio.unal.edu.co/ |
| dc.language.iso.spa.fl_str_mv | spa |
| dc.relation.indexed.spa.fl_str_mv | LaReferencia |
| dc.rights.coar.fl_str_mv | http://purl.org/coar/access_right/c_abf2 |
| dc.rights.license.spa.fl_str_mv | Atribución-NoComercial-SinDerivadas 4.0 Internacional |
| dc.rights.uri.spa.fl_str_mv | http://creativecommons.org/licenses/by-nc-nd/4.0/ |
| dc.rights.accessrights.spa.fl_str_mv | info:eu-repo/semantics/openAccess |
| dc.format.extent.spa.fl_str_mv | 47 páginas |
| dc.format.mimetype.spa.fl_str_mv | application/pdf |
| dc.publisher.spa.fl_str_mv | Universidad Nacional de Colombia |
| dc.publisher.program.spa.fl_str_mv | Medellín - Minas - Maestría en Ingeniería - Analítica |
| dc.publisher.faculty.spa.fl_str_mv | Facultad de Minas |
| dc.publisher.place.spa.fl_str_mv | Medellín, Colombia |
| dc.publisher.branch.spa.fl_str_mv | Universidad Nacional de Colombia - Sede Medellín |
| institution | Universidad Nacional de Colombia |
| bitstream.url.fl_str_mv | https://repositorio.unal.edu.co/bitstream/unal/85423/3/license.txt; https://repositorio.unal.edu.co/bitstream/unal/85423/4/1037660293.2024.pdf; https://repositorio.unal.edu.co/bitstream/unal/85423/5/1037660293.2024.pdf.jpg |
| bitstream.checksum.fl_str_mv | eb34b1cf90b7e1103fc9dfd26be24b4a; 84cd4db07f36e4975328d9fc31d1ec28; d03470412faa5df8cda0b61c921b14be |
| bitstream.checksumAlgorithm.fl_str_mv | MD5; MD5; MD5 |
| repository.name.fl_str_mv | Repositorio Institucional Universidad Nacional de Colombia |
| repository.mail.fl_str_mv | repositorio_nal@unal.edu.co |
| _version_ | 1814089757556211712 |
Dollár, “Focal Loss for Dense Object Detection.” arXiv, Feb. 07, 2018. doi: 10.48550/arXiv.1708.02002.K. McGarigal, S. Stafford, and S. Cushman, “Cluster Analysis,” in Multivariate Statistics for Wildlife and Ecology Research, K. McGarigal, S. Stafford, and S. Cushman, Eds., New York, NY: Springer, 2000, pp. 81–128. doi: 10.1007/978-1-4612-1288-1_3.S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” presented at the Advances in Neural Information Processing Systems, 2015, pp. 91–99.K. Pearson, “LIII. On lines and planes of closest fit to systems of points in space,” Lond. Edinb. Dublin Philos. Mag. J. Sci., vol. 2, no. 11, pp. 559–572, Nov. 1901, doi: 10.1080/14786440109462720.X. Liu, X. Miao, H. Jiang, and J. Chen, “Data analysis in visual power line inspection: An in-depth review of deep learning for component detection and fault diagnosis,” Annu. Rev. Control, vol. 50, pp. 253–277, Jan. 2020, doi: 10.1016/j.arcontrol.2020.09.002.M. Cano-Solis, J. R. Ballesteros, and J. W. Branch-Bedoya, “VEPL Dataset: A Vegetation Encroachment in Power Line Corridors Dataset for Semantic Segmentation of Drone Aerial Orthomosaics,” Data, vol. 8, no. 8, Art. no. 8, Aug. 2023, doi: 10.3390/data8080128.Z. Restrepo, S. Botero, S. González-Caro, C. Ortiz-Yusty, and E. Alvarez-Davila, Guía de flora y fauna del sistema local de área protegidas de Envigado (Colombia). Medellin: Editorial Jardín Botánico de Medellín, 2018.J. R. Ballesteros, G. Sanchez-Torres, and J. W. Branch-Bedoya, “HAGDAVS: Height-Augmented Geo-Located Dataset for Detection and Semantic Segmentation of Vehicles in Drone Aerial Orthomosaics,” Data, vol. 7, no. 4, Art. no. 4, Apr. 2022, doi: 10.3390/data7040050.“Agisoft Metashape: Agisoft Metashape.” Accessed: Apr. 02, 2023. [Online]. Available: https://www.agisoft.com/A. Torralba and A. A. 
Efros, “Unbiased look at dataset bias,” presented at the Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2011, pp. 1521–1528. doi: 10.1109/CVPR.2011.5995347.A. Buslaev, V. I. Iglovikov, E. Khvedchenya, A. Parinov, M. Druzhinin, and A. A. Kalinin, “Albumentations: Fast and flexible image augmentations,” Inf. Switz., vol. 11, no. 2, 2020, doi: 10.3390/info11020125.R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization,” presented at the Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 618–626. doi: 10.1109/ICCV.2017.74.E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. V. Le, “Autoaugment: Learning augmentation strategies from data,” presented at the Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2019, pp. 113–123. doi: 10.1109/CVPR.2019.00020.I. J. Goodfellow et al., “Generative Adversarial Networks,” arXiv.org. Accessed: Jan. 09, 2024. [Online]. Available: https://arxiv.org/abs/1406.2661v1A. Mikołajczyk and M. Grochowski, “Data augmentation for improving deep learning in image classification problem,” presented at the 2018 International Interdisciplinary PhD Workshop, IIPhDW 2018, 2018, pp. 117–122. doi: 10.1109/IIPHDW.2018.8388338.L. Taylor and G. Nitschke, “Improving Deep Learning with Generic Data Augmentation,” presented at the Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence, SSCI 2018, 2019, pp. 1542–1547. doi: 10.1109/SSCI.2018.8628742.O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation.” arXiv, May 18, 2015. doi: 10.48550/arXiv.1505.04597.L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.” arXiv, May 11, 2017. 
Accessed: May 13, 2023. [Online]. Available: http://arxiv.org/abs/1606.00915.F. I. Diakogiannis, F. Waldner, P. Caccetta, and C. Wu, “ResUNet-a: A deep learning framework for semantic segmentation of remotely sensed data,” ISPRS J. Photogramm. Remote Sens., vol. 162, pp. 94–114, 2020, doi: 10.1016/j.isprsjprs.2020.01.013.C. H. Sudre, W. Li, T. Vercauteren, S. Ourselin, and M. J. Cardoso, “Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations,” Lect. Notes Comput. Sci. Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinforma., vol. 10553 LNCS, pp. 240–248, 2017, doi: 10.1007/978-3-319-67558-9_28.D. Peng, Y. Zhang, and H. Guan, “End-to-end change detection for high resolution satellite images using improved UNet++,” Remote Sens., vol. 11, no. 11, 2019, doi: 10.3390/rs11111382.X. Yang, X. Li, Y. Ye, R. Y. K. Lau, X. Zhang, and X. Huang, “Road Detection and Centerline Extraction Via Deep Recurrent Convolutional Neural Network U-Net,” IEEE Trans. Geosci. Remote Sens., vol. 57, no. 9, pp. 7209–7220, Sep. 2019, doi: 10.1109/TGRS.2019.2912301.C. Tan, F. Sun, T. Kong, W. Zhang, C. Yang, and C. Liu, “A Survey on Deep Transfer Learning,” in Artificial Neural Networks and Machine Learning – ICANN 2018, V. Kůrková, Y. Manolopoulos, B. Hammer, L. Iliadis, and I. Maglogiannis, Eds., in Lecture Notes in Computer Science. Cham: Springer International Publishing, 2018, pp. 270–279. doi: 10.1007/978-3-030-01424-7_27.R. Giorgiani do Nascimento and F. Viana, “Satellite Image Classification and Segmentation with Transfer Learning,” in AIAA Scitech 2020 Forum, in AIAA SciTech Forum. , American Institute of Aeronautics and Astronautics, 2020. doi: 10.2514/6.2020-1864.M. Wurm, T. Stark, X. X. Zhu, M. Weigand, and H. Taubenböck, “Semantic segmentation of slums in satellite images using transfer learning on fully convolutional neural networks,” ISPRS J. Photogramm. Remote Sens., vol. 150, pp. 59–69, 2019, doi: 10.1016/j.isprsjprs.2019.02.006.T. 
Chen et al., “Pavement crack detection and recognition using the architecture of segNet,” J. Ind. Inf. Integr., vol. 18, 2020, doi: 10.1016/j.jii.2020.100144.P. Bosilj, E. Aptoula, T. Duckett, and G. Cielniak, “Transfer learning between crop types for semantic segmentation of crops versus weeds in precision agriculture,” J. Field Robot., vol. 37, no. 1, pp. 7–19, 2020, doi: 10.1002/rob.21869.A. Garcia-Garcia, S. Orts-Escolano, S. Oprea, V. Villena-Martinez, P. Martinez-Gonzalez, and J. Garcia-Rodriguez, “A survey on deep learning techniques for image and video semantic segmentation,” Appl. Soft Comput., vol. 70, pp. 41–65, Sep. 2018, doi: 10.1016/j.asoc.2018.05.018.S. S. M. Salehi, D. Erdogmus, and A. Gholipour, “Tversky loss function for image segmentation using 3D fully convolutional deep networks.” arXiv, Jun. 18, 2017. doi: 10.48550/arXiv.1706.05721.M. Cano-Solis, J. R. Ballesteros, and J. W. Branch, “VEPL dataset: A Vegetation Encroachment in Power Line Corridors Dataset for Semantic Segmentation in Drone Aerial Orthomosaics”, Accessed: May 10, 2023. [Online]. Available: https://zenodo.org/record/7800234/preview/ORTHOMOSAICS-DSM-MASK.zipM. Cano-Solis, J. R. Ballesteros, and G. Sanchez-Torres, “VEPL-Net: A Deep Learning Ensemble for Automatic Segmentation of Vegetation Encroachment in Power Line Corridors Using UAV Imagery,” ISPRS Int. J. Geo-Inf., vol. 12, no. 11, Art. no. 11, Nov. 2023, doi: 10.3390/ijgi12110454.H. L. Yang, J. Yuan, D. Lunga, M. Laverdiere, A. Rose, and B. Bhaduri, “Building Extraction at Scale Using Convolutional Neural Network: Mapping of the United States,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., vol. 11, no. 8, pp. 2600–2614, 2018, doi: 10.1109/JSTARS.2018.2835377.Q. Zhu, C. Liao, H. Hu, X. Mei, and H. Li, “MAP-Net: Multiple Attending Path Neural Network for Building Footprint Extraction from Remote Sensed Imagery,” IEEE Trans. Geosci. Remote Sens., vol. 59, no. 7, pp. 
6169–6181, 2021, doi: 10.1109/TGRS.2020.3026051.EstudiantesInvestigadoresMedios de comunicaciónPúblico generalReceptores de fondos federales y solicitantesLICENSElicense.txtlicense.txttext/plain; charset=utf-85879https://repositorio.unal.edu.co/bitstream/unal/85423/3/license.txteb34b1cf90b7e1103fc9dfd26be24b4aMD53ORIGINAL1037660293.2024.pdf1037660293.2024.pdfTesis de Maestría en Ingeniería - Analíticaapplication/pdf1274413https://repositorio.unal.edu.co/bitstream/unal/85423/4/1037660293.2024.pdf84cd4db07f36e4975328d9fc31d1ec28MD54THUMBNAIL1037660293.2024.pdf.jpg1037660293.2024.pdf.jpgGenerated Thumbnailimage/jpeg3506https://repositorio.unal.edu.co/bitstream/unal/85423/5/1037660293.2024.pdf.jpgd03470412faa5df8cda0b61c921b14beMD55unal/85423oai:repositorio.unal.edu.co:unal/854232024-08-21 23:13:04.827Repositorio Institucional Universidad Nacional de Colombiarepositorio_nal@unal.edu.coUEFSVEUgMS4gVMOJUk1JTk9TIERFIExBIExJQ0VOQ0lBIFBBUkEgUFVCTElDQUNJw5NOIERFIE9CUkFTIEVOIEVMIFJFUE9TSVRPUklPIElOU1RJVFVDSU9OQUwgVU5BTC4KCkxvcyBhdXRvcmVzIHkvbyB0aXR1bGFyZXMgZGUgbG9zIGRlcmVjaG9zIHBhdHJpbW9uaWFsZXMgZGUgYXV0b3IsIGNvbmZpZXJlbiBhIGxhIFVuaXZlcnNpZGFkIE5hY2lvbmFsIGRlIENvbG9tYmlhIHVuYSBsaWNlbmNpYSBubyBleGNsdXNpdmEsIGxpbWl0YWRhIHkgZ3JhdHVpdGEgc29icmUgbGEgb2JyYSBxdWUgc2UgaW50ZWdyYSBlbiBlbCBSZXBvc2l0b3JpbyBJbnN0aXR1Y2lvbmFsLCBiYWpvIGxvcyBzaWd1aWVudGVzIHTDqXJtaW5vczoKCgphKQlMb3MgYXV0b3JlcyB5L28gbG9zIHRpdHVsYXJlcyBkZSBsb3MgZGVyZWNob3MgcGF0cmltb25pYWxlcyBkZSBhdXRvciBzb2JyZSBsYSBvYnJhIGNvbmZpZXJlbiBhIGxhIFVuaXZlcnNpZGFkIE5hY2lvbmFsIGRlIENvbG9tYmlhIHVuYSBsaWNlbmNpYSBubyBleGNsdXNpdmEgcGFyYSByZWFsaXphciBsb3Mgc2lndWllbnRlcyBhY3RvcyBzb2JyZSBsYSBvYnJhOiBpKSByZXByb2R1Y2lyIGxhIG9icmEgZGUgbWFuZXJhIGRpZ2l0YWwsIHBlcm1hbmVudGUgbyB0ZW1wb3JhbCwgaW5jbHV5ZW5kbyBlbCBhbG1hY2VuYW1pZW50byBlbGVjdHLDs25pY28sIGFzw60gY29tbyBjb252ZXJ0aXIgZWwgZG9jdW1lbnRvIGVuIGVsIGN1YWwgc2UgZW5jdWVudHJhIGNvbnRlbmlkYSBsYSBvYnJhIGEgY3VhbHF1aWVyIG1lZGlvIG8gZm9ybWF0byBleGlzdGVudGUgYSBsYSBmZWNoYSBkZSBsYSBzdXNjcmlwY2nDs24gZGUgbGEgcHJlc2VudG