Implementar un sistema de reconocimiento e identificación de rostros sobre secuencias de video mediante un modelo de Redes Neuronales Convolucionales y Transfer Learning
Illustrations, photographs, graphs, tables
- Autores:
- Roa García, Fabio Andrés
- Tipo de recurso:
- Trabajo de grado - Maestría
- Fecha de publicación:
- 2021
- Institución:
- Universidad Nacional de Colombia
- Repositorio:
- Universidad Nacional de Colombia
- Idioma:
- spa
- OAI Identifier:
- oai:repositorio.unal.edu.co:unal/80979
- Palabra clave:
- 000 - Ciencias de la computación, información y obras generales::003 - Sistemas
Neural networks (Computer science)
Machine learning
Optical data processing
Redes neurales
Aprendizaje automático (Inteligencia artificial)
Procesamiento óptico de datos
CNN
OpenCV
Dlib
Face recognition
Deep learning
Transfer learning
Deep residual learning
KNN
Aprendizaje profundo
Reconocimiento facial
Transferencia de aprendizaje
Aprendizaje residual profundo
k vecinos más próximos
Redes neuronales convolucionales
- Rights
- openAccess
- License
- Reconocimiento 4.0 Internacional
id | UNACIONAL2_775c1cf3ed37014083a6d269cb91bb13
oai_identifier_str | oai:repositorio.unal.edu.co:unal/80979
network_acronym_str | UNACIONAL2
network_name_str | Universidad Nacional de Colombia
repository_id_str |
dc.title.spa.fl_str_mv |
Implementar un sistema de reconocimiento e identificación de rostros sobre secuencias de video mediante un modelo de Redes Neuronales Convolucionales y Transfer Learning |
dc.title.translated.eng.fl_str_mv |
Implement a face recognition and identification system on video sequences through a model of Convolutional Neural Networks and Transfer Learning |
dc.creator.fl_str_mv |
Roa García, Fabio Andrés |
dc.contributor.advisor.spa.fl_str_mv |
Niño Vásquez, Luis Fernando |
dc.contributor.author.spa.fl_str_mv |
Roa García, Fabio Andrés |
dc.contributor.researchgroup.spa.fl_str_mv |
Laboratorio de Investigación en Sistemas Inteligentes (LISI) |
dc.subject.ddc.spa.fl_str_mv |
000 - Ciencias de la computación, información y obras generales::003 - Sistemas |
dc.subject.lemb.eng.fl_str_mv |
Neural networks (Computer science) Machine learning Optical data processing |
dc.subject.lemb.spa.fl_str_mv |
Redes neurales Aprendizaje automático (Inteligencia artificial) Procesamiento óptico de datos |
dc.subject.proposal.eng.fl_str_mv |
CNN OpenCV Dlib Face recognition Deep learning Transfer learning Deep residual learning |
dc.subject.proposal.fra.fl_str_mv |
KNN |
dc.subject.proposal.spa.fl_str_mv |
Aprendizaje profundo Reconocimiento facial Transferencia de aprendizaje Aprendizaje residual profundo k vecinos más próximos Redes neuronales convolucionales |
description |
Illustrations, photographs, graphs, tables |
publishDate |
2021 |
dc.date.issued.none.fl_str_mv |
2021-09-10 |
dc.date.accessioned.none.fl_str_mv |
2022-02-14T20:20:03Z |
dc.date.available.none.fl_str_mv |
2022-02-14T20:20:03Z |
dc.type.spa.fl_str_mv |
Trabajo de grado - Maestría |
dc.type.driver.spa.fl_str_mv |
info:eu-repo/semantics/masterThesis |
dc.type.version.spa.fl_str_mv |
info:eu-repo/semantics/acceptedVersion |
dc.type.content.spa.fl_str_mv |
Text |
dc.type.redcol.spa.fl_str_mv |
http://purl.org/redcol/resource_type/TM |
dc.identifier.uri.none.fl_str_mv |
https://repositorio.unal.edu.co/handle/unal/80979 |
dc.identifier.instname.spa.fl_str_mv |
Universidad Nacional de Colombia |
dc.identifier.reponame.spa.fl_str_mv |
Repositorio Institucional Universidad Nacional de Colombia |
dc.identifier.repourl.spa.fl_str_mv |
https://repositorio.unal.edu.co/ |
dc.language.iso.spa.fl_str_mv |
spa |
dc.relation.references.spa.fl_str_mv |
M. Liu and Z. Liu, “Deep Reinforcement Learning Visual-Text Attention for Multimodal Video Classification,” in 1st International Workshop on Multimodal Understanding and Learning for Embodied Applications - MULEA ’19, pp. 13–21.
S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, Oct. 2010.
X. Ran, H. Chen, Z. Liu, and J. Chen, “Delivering Deep Learning to Mobile Devices via Offloading,” in Proceedings of the Workshop on Virtual Reality and Augmented Reality Network - VR/AR Network ’17, pp. 42–47.
O. I. Abiodun, A. Jantan, A. E. Omolara, K. V. Dada, N. A. Mohamed, and H. Arshad, “State-of-the-art in artificial neural network applications: A survey,” vol. 4, no. 11, p. e00938, 2018.
G. Szirtes, D. Szolgay, Á. Utasi, D. Takács, I. Petrás, and G. Fodor, “Facing reality: an industrial view on large scale use of facial expression analysis,” in Proceedings of the 2013 Emotion Recognition in the Wild Challenge and Workshop - EmotiW ’13, pp. 1–8.
G. Levi and T. Hassner, “Emotion Recognition in the Wild via Convolutional Neural Networks and Mapped Binary Patterns,” in Proceedings of the 2015 ACM International Conference on Multimodal Interaction - ICMI ’15, pp. 503–510.
R. Ewerth, M. Mühling, and B. Freisleben, “Robust Video Content Analysis via Transductive Learning,” vol. 3, no. 3, pp. 1–26.
M. Parchami, S. Bashbaghi, and E. Granger, “CNNs with cross-correlation matching for face recognition in video surveillance using a single training sample per person,” in 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pp. 1–6.
H. Khan, A. Atwater, and U. Hengartner, “Itus: an implicit authentication framework for Android,” in Proceedings of the 20th Annual International Conference on Mobile Computing and Networking - MobiCom ’14, pp. 507–518.
L. N. Huynh, Y. Lee, and R. K. Balan, “DeepMon: Mobile GPU-based Deep Learning Framework for Continuous Vision Applications,” in Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services, pp. 82–95.
R. Iqbal, F. Doctor, B. More, S. Mahmud, and U. Yousuf, “Big data analytics: Computational intelligence techniques and application areas,” Technol. Forecast. Soc. Change, vol. 153, p. 119253, 2020.
U. Schmidt-Erfurth, A. Sadeghipour, B. S. Gerendas, S. M. Waldstein, and H. Bogunović, “Artificial intelligence in retina,” vol. 67, pp. 1–29.
M. Mittal et al., “An efficient edge detection approach to provide better edge connectivity for image analysis,” IEEE Access, vol. 7, pp. 33240–33255, 2019.
D. Sirohi, N. Kumar, and P. S. Rana, “Convolutional neural networks for 5G-enabled Intelligent Transportation System: A systematic review,” vol. 153, pp. 459–498.
A. Kumar, A. Kaur, and M. Kumar, “Face detection techniques: a review,” Artif. Intell. Rev., vol. 52, no. 2, pp. 927–948, 2019.
K. S. Gautam and S. K. Thangavel, “Video analytics-based intelligent surveillance system for smart buildings,” Soft Comput., vol. 23, no. 8, pp. 2813–2837, 2019.
J. Yu, K. Sun, F. Gao, and S. Zhu, “Face biometric quality assessment via light CNN,” vol. 107, pp. 25–32.
L. T. Nguyen-Meidine, E. Granger, M. Kiran, and L.-A. Blais-Morin, “A comparison of CNN-based face and head detectors for real-time video surveillance applications,” in 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA), pp. 1–7.
B. Chacua et al., “People Identification through Facial Recognition using Deep Learning,” in 2019 IEEE Latin American Conference on Computational Intelligence (LA-CCI), 2019.
J. Park, J. Chen, Y. K. Cho, D. Y. Kang, and B. J. Son, “CNN-based person detection using infrared images for night-time intrusion warning systems,” Sensors (Switzerland), vol. 20, no. 1, 2020.
A. Bansal, C. Castillo, R. Ranjan, and R. Chellappa, “The Do’s and Don’ts for CNN-Based Face Verification,” in 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), 2017, pp. 2545–2554.
J. Galbally, “A new foe in biometrics: A narrative review of side-channel attacks,” vol. 96, p. 101902.
Y. Yao, H. Li, H. Zheng, and B. Y. Zhao, “Latent Backdoor Attacks on Deep Neural Networks,” in Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pp. 2041–2055.
Y. Akbulut, A. Sengur, U. Budak, and S. Ekici, “Deep learning based face liveness detection in videos,” in 2017 International Artificial Intelligence and Data Processing Symposium (IDAP), pp. 1–4.
J. Zhang, W. Li, P. Ogunbona, and D. Xu, “Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective,” vol. 52, no. 1, pp. 1–38.
C. X. Lu et al., “Autonomous Learning for Face Recognition in the Wild via Ambient Wireless Cues,” in The World Wide Web Conference - WWW ’19, pp. 1175–1186.
J. C. Hung, K.-C. Lin, and N.-X. Lai, “Recognizing learning emotion based on convolutional neural networks and transfer learning,” vol. 84, p. 105724.
S. Zhang, X. Pan, Y. Cui, X. Zhao, and L. Liu, “Learning Affective Video Features for Facial Expression Recognition via Hybrid Deep Learning,” IEEE Access, vol. 7, pp. 32297–32304, 2019.
C. Herrmann, T. Müller, D. Willersinn, and J. Beyerer, “Real-time person detection in low-resolution thermal infrared imagery with MSER and CNNs,” p. 99870I.
F. An and Z. Liu, “Facial expression recognition algorithm based on parameter adaptive initialization of CNN and LSTM,” vol. 36, no. 3, pp. 483–498.
Z. Zhang, P. Luo, C. C. Loy, and X. Tang, “Joint Face Representation Adaptation and Clustering in Videos,” in Computer Vision – ECCV 2016, vol. 9907, B. Leibe, J. Matas, N. Sebe, and M. Welling, Eds. Springer International Publishing, pp. 236–251.
E. G. Ortiz, A. Wright, and M. Shah, “Face recognition in movie trailers via mean sequence sparse representation-based classification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 3531–3538.
“Privacy Protection for Life-log Video.” [Online]. Available: https://www.researchgate.net/publication/4249807_Privacy_Protection_for_Life-log_Video. [Accessed: 13-Jun-2021].
Superintendencia de Industria y Comercio, “Protección de datos personales en sistemas de videovigilancia,” 2016.
S. Ebrahimi Kahou, V. Michalski, K. Konda, R. Memisevic, and C. Pal, “Recurrent Neural Networks for Emotion Recognition in Video,” in Proceedings of the 2015 ACM International Conference on Multimodal Interaction - ICMI ’15, pp. 467–474.
E. Flouty, O. Zisimopoulos, and D. Stoyanov, “FaceOff: Anonymizing Videos in the Operating Rooms,” in OR 2.0 Context-Aware Operating Theaters, Computer Assisted Robotic Endoscopy, Clinical Image-Based Procedures, and Skin Image Analysis, vol. 11041, D. Stoyanov et al., Eds. Springer International Publishing, pp. 30–38.
A. M. Turing, “Maquinaria computacional e inteligencia” (Computing Machinery and Intelligence), 1950.
G. R. Yang and X. J. Wang, “Artificial Neural Networks for Neuroscientists: A Primer,” Neuron, vol. 107, no. 6, pp. 1048–1070, Sep. 2020.
J. Singh and R. Banerjee, “A Study on Single and Multi-layer Perceptron Neural Network,” in 2019 3rd International Conference on Computing Methodologies and Communication (ICCMC), 2019.
I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.
E. Stevens, L. Antiga, and T. Viehmann, Deep Learning with PyTorch. Manning Publications, 2020.
K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
M. Kaya and H. Ş. Bilge, “Deep Metric Learning: A Survey,” Symmetry, vol. 11, no. 9, p. 1066, Aug. 2019.
B. R. Vasconcellos, M. Rudek, and M. de Souza, “A Machine Learning Method for Vehicle Classification by Inductive Waveform Analysis,” IFAC-PapersOnLine, vol. 53, no. 2, pp. 13928–13932, Jan. 2020. |
dc.rights.coar.fl_str_mv |
http://purl.org/coar/access_right/c_abf2 |
dc.rights.license.spa.fl_str_mv |
Reconocimiento 4.0 Internacional |
dc.rights.uri.spa.fl_str_mv |
http://creativecommons.org/licenses/by/4.0/ |
dc.rights.accessrights.spa.fl_str_mv |
info:eu-repo/semantics/openAccess |
dc.format.extent.spa.fl_str_mv |
xvi, 70 páginas |
dc.format.mimetype.spa.fl_str_mv |
application/pdf |
dc.publisher.spa.fl_str_mv |
Universidad Nacional de Colombia |
dc.publisher.program.spa.fl_str_mv |
Bogotá - Ingeniería - Maestría en Ingeniería - Ingeniería de Sistemas y Computación |
dc.publisher.department.spa.fl_str_mv |
Departamento de Ingeniería de Sistemas e Industrial |
dc.publisher.faculty.spa.fl_str_mv |
Facultad de Ingeniería |
dc.publisher.place.spa.fl_str_mv |
Bogotá, Colombia |
dc.publisher.branch.spa.fl_str_mv |
Universidad Nacional de Colombia - Sede Bogotá |
bitstream.url.fl_str_mv |
https://repositorio.unal.edu.co/bitstream/unal/80979/3/1075654641.2021.pdf https://repositorio.unal.edu.co/bitstream/unal/80979/4/license.txt https://repositorio.unal.edu.co/bitstream/unal/80979/5/1075654641.2021.pdf.jpg |
bitstream.checksum.fl_str_mv |
39d112f7eb05309b165887fe6d41cc8c 8153f7789df02f0a4c9e079953658ab2 2359a8daf5ba0df04f84682077294535 |
bitstream.checksumAlgorithm.fl_str_mv |
MD5 MD5 MD5 |
repository.name.fl_str_mv |
Repositorio Institucional Universidad Nacional de Colombia |
repository.mail.fl_str_mv |
repositorio_nal@unal.edu.co |
_version_ |
1814089805933314048 |
spelling |
Abstract (translated from Spanish): In the field of biometrics and image analysis, important advances have been made in recent years; facial recognition techniques have been formalized through the use of convolutional neural networks supported by transfer learning and classification algorithms. Taken together, these techniques can be applied to video analysis, with a series of additional steps to optimize processing time and model accuracy. The purpose of this work is to use the ResNet-34 model together with transfer learning for face recognition and identification on video sequences. (Text taken from the source.)
Abstract (English): Nowadays, thanks to technological innovation, there has been a significant increase in the production of multimedia content through devices such as tablets, cell phones, and computers. Most of this multimedia content is in video format, which creates a need to extract useful information from it; doing so, however, is a tedious task, since videos cannot be analyzed without excessive resource usage and long execution times. Fortunately, in the field of biometrics and image analysis there have been important advances in recent years: facial recognition techniques have been formalized through the use of convolutional neural networks supported by transfer learning and classification algorithms. Together, these techniques can be applied to video analysis, performing a series of additional steps to optimize processing times and model accuracy. The purpose of this work is to use the ResNet-34 model and transfer learning for face recognition and identification on video footage.
Includes annexes. Master's degree: Magíster en Ingeniería - Ingeniería de Sistemas y Computación.
Methodology (translated from Spanish):
- Phase 1, Business understanding: understand the project objectives, define the requirements, and turn them into a formal problem definition.
- Phase 2, Data understanding: collect the raw data, focusing on data quality and on detecting subsets of interest for the project.
- Phase 3, Data preparation: all activities related to building the final dataset, including cleaning, transformation, discretization, reduction, and feature engineering.
- Phase 4, Modeling: select and apply the modeling algorithms and techniques, such as CNNs and transfer learning. This phase can be cyclical depending on the techniques selected; if so, it returns to the data-preparation phase and iterates until the dataset is consistent with the applied models.
- Phase 5, Evaluation: evaluate and validate the models built, measuring quality and performance against the project requirements and objectives.
- Phase 6, Deployment: implement the final product in a real-world application, together with the deliverables from the previous phases and a final report consolidating the technical specification, project development, and results.
Research line: Sistemas inteligentes. Target audience: students, researchers, teachers, general public. |
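The identification stage described in the abstract (faces embedded by a ResNet-style CNN and classified with k-nearest neighbours, as the record's KNN keyword suggests) can be sketched as follows. This is a minimal illustrative sketch, not the thesis's actual implementation: the synthetic 128-dimensional vectors stand in for the descriptors a dlib/ResNet face encoder would produce, and the `knn_identify` function and distance threshold of 0.6 are assumptions for the example.

```python
import numpy as np

def knn_identify(embedding, gallery, labels, k=3, threshold=0.6):
    """Identify a face embedding by k-nearest-neighbours over a gallery
    of known embeddings (Euclidean distance, majority vote).
    Returns 'unknown' when even the closest match exceeds the threshold."""
    dists = np.linalg.norm(gallery - embedding, axis=1)
    nearest = np.argsort(dists)[:k]
    if dists[nearest[0]] > threshold:
        return "unknown"
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Synthetic "embeddings": two identities, tightly clustered and far apart.
rng = np.random.default_rng(0)
alice = rng.normal(0.0, 0.02, size=(5, 128))   # 5 enrolled frames of "alice"
bob = rng.normal(1.0, 0.02, size=(5, 128))     # 5 enrolled frames of "bob"
gallery = np.vstack([alice, bob])
labels = ["alice"] * 5 + ["bob"] * 5

probe = rng.normal(0.0, 0.02, size=128)        # a new frame near the "alice" cluster
print(knn_identify(probe, gallery, labels))    # → alice
```

In a full video pipeline, each sampled frame would first pass through face detection and the CNN encoder; the distance threshold is what lets the system reject faces that were never enrolled.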