Aprendizaje profundo en dispositivo portable para el reconocimiento de frutas y verduras
This project aims to train deep neural networks to recognize around 20 fruits and vegetables using a camera attached to portable devices (smartphones and embedded systems). We built an acquisition system to gather pictures of different kinds of fruits and vegetables and collected further images from the Internet to train a convolutional neural network. Instead of defining a new topology and training it from scratch, we took advantage of transfer learning and fine-tuned MobileNet models to classify our images into their corresponding classes. We also trained a lighter model and deployed it both on a smartphone and on embedded systems (Raspberry Pi 3+ and the Jetson TX2 development kit). Finally, we developed a smartphone application to provide useful information about the recognized fruits and vegetables.
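The abstract describes fine-tuning a pretrained MobileNet rather than training a network from scratch. As a self-contained illustration of the underlying idea (freeze a pretrained feature extractor, train only a new classification head), here is a minimal NumPy sketch on synthetic data; the backbone, dataset, learning rate, and class count are stand-ins for illustration, not the thesis's actual model or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained backbone (e.g. MobileNet up to its pooling
# layer): a frozen projection followed by ReLU. In the real project the
# backbone weights come from ImageNet pretraining; here they are random
# because this is only a sketch of the transfer-learning mechanics.
N_IN, N_FEAT, N_CLASSES = 64, 16, 3
W_backbone = rng.normal(size=(N_IN, N_FEAT))  # frozen, never updated

def backbone(x):
    return np.maximum(x @ W_backbone, 0.0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy dataset: three synthetic "classes" with different means.
X = np.concatenate([rng.normal(loc=m, size=(30, N_IN)) for m in (-1.0, 0.0, 1.0)])
y = np.repeat(np.arange(N_CLASSES), 30)
Y = np.eye(N_CLASSES)[y]  # one-hot labels

# Features are computed once: the backbone stays frozen during fine-tuning.
feats = backbone(X)
feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)

# Trainable classification head (the only part that is retrained).
W_head = np.zeros((N_FEAT, N_CLASSES))
b_head = np.zeros(N_CLASSES)

losses = []
for _ in range(300):  # plain gradient descent on cross-entropy loss
    p = softmax(feats @ W_head + b_head)
    losses.append(-np.mean(np.sum(Y * np.log(p + 1e-12), axis=1)))
    grad = (p - Y) / len(X)
    W_head -= 0.1 * feats.T @ grad
    b_head -= 0.1 * grad.sum(axis=0)

accuracy = np.mean(np.argmax(feats @ W_head + b_head, axis=1) == y)
```

Because only the small head is trained, fine-tuning needs far less data and compute than training the whole network, which is what makes the approach practical for a modest, self-collected dataset like the one described here.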
- Authors: Muñoz Bocanegra, Ricardo
- Resource type: Undergraduate degree project (trabajo de grado de pregrado)
- Publication date: 2019
- Institution: Universidad Autónoma de Occidente
- Repository: RED: Repositorio Educativo Digital UAO
- Language: Spanish (spa)
- OAI identifier: oai:red.uao.edu.co:10614/11083
- Online access: http://hdl.handle.net/10614/11083
- Keywords: Ingeniería Mecatrónica; Redes neurales (Computadores); Sistemas de computador embebidos; Aplicaciones móviles; Procesamiento de imágenes; Neural networks (Computer science); Embedded computer systems; Mobile apps
- Rights: open access
- License: Derechos Reservados - Universidad Autónoma de Occidente
- Advisor: López Sotelo, Jesús Alfonso
- Date issued: 2019-05-23
- Date available online: 2019-09-10
- License: Atribución 4.0 Internacional (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
- Format: application/pdf, 81 pages
- Coverage: Universidad Autónoma de Occidente. Calle 25 115-85. Km 2 vía Cali-Jamundí
- Publisher: Universidad Autónoma de Occidente
- Program: Ingeniería Mecatrónica
- Department: Departamento de Automática y Electrónica
- Faculty: Facultad de Ingeniería
- Repository contact: repositorio@uao.edu.co
Resumen (English translation): The purpose of this project is to train deep neural networks to classify around 20 fruits and vegetables using images from the cameras of a computer and of portable devices (smartphones and embedded systems). An acquisition system was built to collect images of different kinds of fruits and vegetables, and images from the Internet were also used to strengthen the dataset for retraining convolutional neural networks. Instead of building a new neural network from scratch, transfer learning was used to retrain a MobileNet model to classify the images into their corresponding classes. Pruning of operations unnecessary for inference and approximation of synaptic weights were also used to obtain a lighter model able to run on a mobile phone and on an embedded system. Additionally, a mobile application was developed to display relevant information about the food recognized by the network. Finally, the inference system was deployed on two embedded devices: a Raspberry Pi 3+ and the Nvidia Jetson TX2 development kit.

Degree awarded: Ingeniero(a) Mecatrónico(a) (undergraduate). Proyecto de grado, Universidad Autónoma de Occidente, 2019.
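The Spanish summary in this record mentions obtaining a lighter model by pruning inference-irrelevant operations and approximating the synaptic weights. The thesis used TensorFlow's own tooling for this; as a library-agnostic illustration of the weight-approximation idea, here is a minimal NumPy sketch of symmetric per-tensor 8-bit post-training quantization (layer shape and values are illustrative, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize_int8(w):
    """Symmetric per-tensor 8-bit quantization: w is approximated as scale * q,
    where q holds int8 values in [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from the int8 codes."""
    return q.astype(np.float32) * scale

# Illustrative "weight matrix" of one trained layer.
w = rng.normal(scale=0.1, size=(256, 128)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32 ...
compression = w.nbytes / q.nbytes
# ... at the cost of a small, bounded rounding error (at most scale/2 per weight).
max_err = np.abs(w - w_hat).max()
```

The same trade-off drives on-device deployment in general: a 4x smaller model with per-weight error bounded by half the quantization step usually loses little accuracy while fitting the memory and latency budget of a phone or a single-board computer.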