Desarrollo de un estudio para la implementación de cosecha selectiva de café arábica aplicando vibraciones de alta frecuencia

This study analyzes a device that stimulates ripe coffee fruits while discriminating against the movement of green fruits, using very specific frequencies of movement. The device is conceived as an acoustic system, with a technique that focuses energy through arrays of harmonic waves that must be controlled so that only ripe fruits are excited. A clear opportunity therefore lies in studying the fruit-peduncle substructure to determine, for the different ripening stages, dynamic indices that favor the detachment of ripe fruits.

Authors:
Yarce Herrera, Jeison Ivan
Resource type:
Undergraduate degree project
Publication date:
2022
Institution:
Universidad Autónoma de Bucaramanga - UNAB
Repository:
Repositorio UNAB
Language:
spa (Spanish)
OAI Identifier:
oai:repository.unab.edu.co:20.500.12749/19201
Online access:
http://hdl.handle.net/20.500.12749/19201
Keywords:
Mechatronic
Coffee harvest
Arabica coffee
Ripe fruits
Deep learning
Artificial intelligence
Technological innovations
Agricultural innovations
Process development
Algorithms
Mecatrónica
Inteligencia artificial
Innovaciones tecnológicas
Innovaciones agrícolas
Desarrollo de procesos
Algoritmos
Cosecha de café
Café arábica
Frutos maduros
Aprendizaje profundo
Rights
License
http://creativecommons.org/licenses/by-nc-nd/2.5/co/
dc.title.spa.fl_str_mv Desarrollo de un estudio para la implementación de cosecha selectiva de café arábica aplicando vibraciones de alta frecuencia
dc.title.translated.spa.fl_str_mv Development of a study for the implementation of selective harvesting of arabica coffee by applying high frequency vibrations
dc.creator.fl_str_mv Yarce Herrera, Jeison Ivan
dc.contributor.advisor.none.fl_str_mv Arizmendi Pereira, Carlos Julio
dc.contributor.author.none.fl_str_mv Yarce Herrera, Jeison Ivan
dc.contributor.cvlac.spa.fl_str_mv Arizmendi Pereira, Carlos Julio [0001381550]
dc.contributor.googlescholar.spa.fl_str_mv Arizmendi Pereira, Carlos Julio [JgT_je0AAAAJ]
dc.contributor.orcid.spa.fl_str_mv Arizmendi Pereira, Carlos Julio
dc.contributor.scopus.spa.fl_str_mv Arizmendi Pereira, Carlos Julio [16174088500]
dc.contributor.researchgate.spa.fl_str_mv Arizmendi Pereira, Carlos Julio [Carlos_Arizmendi2]
dc.contributor.apolounab.spa.fl_str_mv Arizmendi Pereira, Carlos Julio [carlos-julio-arizmendi-pereira]
dc.subject.keywords.spa.fl_str_mv Mechatronic
Coffee harvest
Arabica coffee
Ripe fruits
Deep learning
Artificial intelligence
Technological innovations
Agricultural innovations
Process development
Algorithms
dc.subject.lemb.spa.fl_str_mv Mecatrónica
Inteligencia artificial
Innovaciones tecnológicas
Innovaciones agrícolas
Desarrollo de procesos
Algoritmos
dc.subject.proposal.spa.fl_str_mv Cosecha de café
Café arábica
Frutos maduros
Aprendizaje profundo
description This study analyzes a device that stimulates ripe coffee fruits while discriminating against the movement of green fruits, using very specific frequencies of movement. The device is conceived as an acoustic system, with a technique that focuses energy through arrays of harmonic waves that must be controlled so that only ripe fruits are excited. A clear opportunity therefore lies in studying the fruit-peduncle substructure to determine, for the different ripening stages, dynamic indices that favor the detachment of ripe fruits.
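The selectivity principle described in the abstract can be sketched with a minimal single-degree-of-freedom model of the fruit-peduncle substructure: ripening changes stiffness and mass, shifting the natural frequency, so a driving signal tuned near the ripe fruit's frequency excites it while leaving green fruits largely unmoved. All stiffness and mass values below are hypothetical illustrations, not measurements from the thesis.

```python
import math

def natural_frequency_hz(stiffness_n_per_m: float, mass_kg: float) -> float:
    """Undamped natural frequency f = (1 / 2*pi) * sqrt(k / m) of a
    single-degree-of-freedom fruit-peduncle model."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

# Hypothetical illustrative values: ripening tends to soften the peduncle
# (lower k) and increase fruit mass (higher m), so the ripe fruit's
# natural frequency falls below that of the green fruit.
f_green = natural_frequency_hz(stiffness_n_per_m=60.0, mass_kg=0.0010)
f_ripe = natural_frequency_hz(stiffness_n_per_m=25.0, mass_kg=0.0016)

print(f"green: {f_green:.1f} Hz, ripe: {f_ripe:.1f} Hz")
# Driving the branch near f_ripe, away from f_green, is the selective
# excitation idea: only ripe fruits respond strongly near resonance.
```

The gap between the two frequencies is what a controlled harmonic wave array would exploit; the actual dynamic indices for each ripening stage are what the study sets out to determine experimentally.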
publishDate 2022
dc.date.issued.none.fl_str_mv 2022
dc.date.accessioned.none.fl_str_mv 2023-03-07T19:12:15Z
dc.date.available.none.fl_str_mv 2023-03-07T19:12:15Z
dc.type.driver.none.fl_str_mv info:eu-repo/semantics/bachelorThesis
dc.type.local.spa.fl_str_mv Trabajo de Grado
dc.type.coar.none.fl_str_mv http://purl.org/coar/resource_type/c_7a1f
dc.type.hasversion.none.fl_str_mv info:eu-repo/semantics/acceptedVersion
dc.type.redcol.none.fl_str_mv http://purl.org/redcol/resource_type/TP
dc.identifier.uri.none.fl_str_mv http://hdl.handle.net/20.500.12749/19201
dc.identifier.instname.spa.fl_str_mv instname:Universidad Autónoma de Bucaramanga - UNAB
dc.identifier.reponame.spa.fl_str_mv reponame:Repositorio Institucional UNAB
dc.identifier.repourl.spa.fl_str_mv repourl:https://repository.unab.edu.co
dc.language.iso.spa.fl_str_mv spa
dc.rights.coar.fl_str_mv http://purl.org/coar/access_right/c_abf2
dc.rights.uri.*.fl_str_mv http://creativecommons.org/licenses/by-nc-nd/2.5/co/
dc.rights.local.spa.fl_str_mv Abierto (Texto Completo)
dc.rights.creativecommons.*.fl_str_mv Atribución-NoComercial-SinDerivadas 2.5 Colombia
dc.format.mimetype.spa.fl_str_mv application/pdf
dc.coverage.spatial.spa.fl_str_mv Colombia
dc.coverage.campus.spa.fl_str_mv UNAB Campus Bucaramanga
dc.publisher.grantor.spa.fl_str_mv Universidad Autónoma de Bucaramanga UNAB
dc.publisher.faculty.spa.fl_str_mv Facultad Ingeniería
dc.publisher.program.spa.fl_str_mv Pregrado Ingeniería Mecatrónica
bitstream.url.fl_str_mv https://repository.unab.edu.co/bitstream/20.500.12749/19201/1/2022_Tesis_Jeison_Yarce.pdf
https://repository.unab.edu.co/bitstream/20.500.12749/19201/2/2022_Licencia_Jeison_Yarce.pdf
https://repository.unab.edu.co/bitstream/20.500.12749/19201/3/license.txt
https://repository.unab.edu.co/bitstream/20.500.12749/19201/4/2022_Tesis_Jeison_Yarce.pdf.jpg
https://repository.unab.edu.co/bitstream/20.500.12749/19201/5/2022_Licencia_Jeison_Yarce.pdf.jpg
repository.name.fl_str_mv Repositorio Institucional | Universidad Autónoma de Bucaramanga - UNAB
repository.mail.fl_str_mv repositorio@unab.edu.co
Degree awarded:
Ingeniero Mecatrónico
Modality:
Modalidad Presencial (on-campus)
Table of contents:
1 Introduction
1.1 Motivation
1.2 Objectives
2 State of the art
2.1 Introduction
2.2 State of the art in object detection
2.2.1 R-CNN
2.2.2 Fast R-CNN
2.2.3 Faster R-CNN
2.2.4 YOLO
2.3 State of the art in selective harvesting
2.3.1 Mechanical vibrations
2.3.2 Visual techniques
2.3.3 Properties of the coffee fruit
2.3.4 Harmonic analysis
2.4 Main technologies used
2.4.1 Python
2.4.2 PyTorch
2.4.3 OpenCV
2.4.4 Pandas
2.4.5 NumPy
3 Artificial neural networks
3.1 Introduction
3.2 The biological neuron
3.3 The artificial neuron
3.4 Neural network structures
3.4.1 Feed-forward networks
3.4.2 Recurrent networks
3.4.3 Residual networks
3.5 Types of learning in neural networks
3.5.1 Supervised learning
3.5.2 Unsupervised learning
3.5.3 Semi-supervised (hybrid) learning
3.5.4 Reinforcement learning
3.6 Learning methods
3.6.1 Cost function
3.6.2 Gradient descent
3.6.3 Stochastic gradient descent
3.6.4 Backpropagation
3.7 Overfitting prevention measures
3.8 Convolutional neural networks
3.8.1 Convolution layer
3.8.2 Pooling layer
3.8.3 Softmax layer
4 Ripeness-state detector
4.1 Introduction
4.1.1 General operation of the system
4.2 Detection and classification module
5 Acoustic system configuration
5.1 Introduction
5.2 Devices
5.2.1 Turbosound iQ15 15" active loudspeaker
5.2.2 Behringer ECM8000 measurement microphone
5.2.3 PreSonus Studio 24c USB audio interface
5.2.4 Piezoelectric sensor
5.2.5 UNI-T UT353 sound level meter
5.3 Calibration stage
5.4 Support design
5.4.1 Support design
5.4.2 Support manufacturing
5.5 Assembly
5.6 Simulations
6 Results
6.1 Introduction