Security of deep learning models: Vulnerability analysis of data poisoning attacks on the LUCID model for DDoS attack detection.
This study addresses the vulnerability of deep learning models to data poisoning attacks, focusing the analysis on the LUCID model for DDoS attack detection. Network traffic modification techniques were employed to explore how data manipulation can influence the model's ability to distinguish legitimate from malicious traffic. Through an experimental approach that included white-box and black-box testing, the effects of different poisoning strategies on the accuracy, sensitivity, and robustness of the model were evaluated. The results reveal that, despite LUCID's initial effectiveness in traffic classification, its performance is significantly compromised under poisoned-data conditions, highlighting the importance of developing more sophisticated strategies to strengthen the security of DDoS detection systems.
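The poisoning strategy the abstract describes (manipulating training data so the detector misclassifies traffic) can be illustrated with a minimal sketch. This is not the authors' actual pipeline and does not reproduce LUCID: it uses synthetic features and a plain logistic-regression stand-in for the classifier, and a simple label-flipping attack, purely to show how test accuracy degrades as the poisoned fraction of the training set grows.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for traffic features: class 0 = benign, class 1 = DDoS.
X = rng.normal(0, 1, size=(2000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_train, y_train = X[:1500], y[:1500]
X_test, y_test = X[1500:], y[1500:]

def poison_labels(labels, fraction, rng):
    """Flip the labels of a randomly chosen fraction of training samples."""
    poisoned = labels.copy()
    n_flip = int(fraction * len(labels))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

# Retrain on increasingly poisoned labels and measure the accuracy drop.
for fraction in (0.0, 0.1, 0.3):
    clf = LogisticRegression().fit(X_train, poison_labels(y_train, fraction, rng))
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"poison fraction {fraction:.1f}: test accuracy {acc:.3f}")
```

The same loop structure applies to any detector: hold the test set clean, corrupt only the training labels (or features), and track how the evaluation metrics degrade with the poisoning rate.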
- Authors:
- Castaño Lozano, Juan Felipe
- Guillén Fonseca, Sergio Andrés
- Resource type:
- Undergraduate thesis
- Publication date:
- 2024
- Institution:
- Universidad de los Andes
- Repository:
- Séneca: repositorio Uniandes
- Language:
- spa
- OAI Identifier:
- oai:repositorio.uniandes.edu.co:1992/74780
- Online access:
- https://hdl.handle.net/1992/74780
- Keywords:
- DDoS attacks
- Data poisoning
- Deep learning
- LUCID model
- Cybersecurity
- Engineering
- Rights
- openAccess
- License
- Attribution-NonCommercial-NoDerivatives 4.0 International
- Advisors:
- Lozano Garzón, Carlos Andrés
- Montoya Orozco, Germán Adolfo
- Jury:
- Lozano Garzón, Carlos Andrés
- Montoya Orozco, Germán Adolfo
- Date issued:
- 2024-07-29
- Version:
- Accepted version
- Extent:
- 50 pages
- Format:
- application/pdf
- Publisher:
- Universidad de los Andes
- Program:
- Ingeniería de Sistemas y Computación
- Faculty:
- Facultad de Ingeniería
- Department:
- Departamento de Ingeniería de Sistemas y Computación