Hyperparameter optimization in the VITON-GAN architecture

The goal of this project is to find the best configuration of the VITON-GAN model by varying its hyperparameters. To that end, the pretrained VGG19 (Visual Geometry Group 19) model, which is used to compute the VGG perceptual loss, was replaced with the VGG16 and ResNet50 models. In addition, the negative-slope hyperparameter of the Leaky ReLU activation function was changed from 0.2 to 0.1. After training the models, ResNet50 yielded the best result both qualitatively and quantitatively.
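The two hyperparameter changes described in the abstract can be sketched in a framework-agnostic way. This is a minimal illustration, not the thesis code: `leaky_relu`, `perceptual_loss`, `extract_features`, and `toy_backbone` are hypothetical stand-ins, and in practice the features would come from a frozen pretrained network (VGG19 in the original VITON-GAN; VGG16 or ResNet50 in this work).

```python
def leaky_relu(x, negative_slope=0.1):
    # Leaky ReLU: positives pass through; negatives are scaled by the slope.
    # The thesis compares a negative slope of 0.2 (the usual default) vs 0.1.
    return x if x >= 0 else negative_slope * x


def perceptual_loss(extract_features, generated, target):
    # Mean squared distance between backbone features of two images.
    # extract_features stands in for the pretrained feature extractor that
    # the thesis swaps out (VGG19 -> VGG16 / ResNet50); swapping backbones
    # only changes this one argument, not the loss itself.
    fg = extract_features(generated)
    ft = extract_features(target)
    return sum((a - b) ** 2 for a, b in zip(fg, ft)) / len(fg)


# Toy backbone for demonstration: any fixed image-to-features mapping works.
def toy_backbone(img):
    return [leaky_relu(v) for v in img]


loss = perceptual_loss(toy_backbone, [0.5, -1.0], [0.4, -0.8])
```

The point of the sketch is that the perceptual loss is agnostic to the feature extractor, which is what makes the backbone a tunable hyperparameter in the first place.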

Full description

Authors:
Flórez Castro, José Manuel
Resource type:
Undergraduate degree project (bachelor's thesis)
Publication date:
2022
Institution:
Universidad de los Andes
Repository:
Séneca: repositorio Uniandes
Language:
Spanish (spa)
OAI identifier:
oai:repositorio.uniandes.edu.co:1992/60742
Online access:
http://hdl.handle.net/1992/60742
Keywords:
Neural networks
Generative Adversarial Network (GAN)
Unified Net (UNet)
Convolutional-deconvolutional networks
Image processing (computer vision)
Virtual try-on
Engineering
Rights:
openAccess
License:
Attribution 4.0 International (CC BY 4.0), http://creativecommons.org/licenses/by/4.0/
Extent:
14 pages
Format:
application/pdf
Publisher:
Universidad de los Andes
Program:
Ingeniería de Sistemas y Computación
Faculty:
Facultad de Ingeniería
Department:
Departamento de Ingeniería de Sistemas y Computación